00:00:00.000 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v22.11" build number 230 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3732 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.000 Started by timer 00:00:00.145 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.146 The recommended git tool is: git 00:00:00.146 using credential 00000000-0000-0000-0000-000000000002 00:00:00.147 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.182 Fetching changes from the remote Git repository 00:00:00.184 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.220 Using shallow fetch with depth 1 00:00:00.220 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.220 > git --version # timeout=10 00:00:00.253 > git --version # 'git version 2.39.2' 00:00:00.253 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.273 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.273 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.843 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.855 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.868 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.868 > git config core.sparsecheckout # timeout=10 00:00:06.880 > git read-tree -mu HEAD # timeout=10 00:00:06.895 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.913 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.913 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.017 [Pipeline] Start of Pipeline 00:00:07.027 [Pipeline] library 00:00:07.029 Loading library shm_lib@master 00:00:07.029 Library shm_lib@master is cached. Copying from home. 00:00:07.042 [Pipeline] node 00:00:07.060 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:07.061 [Pipeline] { 00:00:07.070 [Pipeline] catchError 00:00:07.071 [Pipeline] { 00:00:07.082 [Pipeline] wrap 00:00:07.088 [Pipeline] { 00:00:07.093 [Pipeline] stage 00:00:07.094 [Pipeline] { (Prologue) 00:00:07.106 [Pipeline] echo 00:00:07.107 Node: VM-host-SM9 00:00:07.111 [Pipeline] cleanWs 00:00:07.119 [WS-CLEANUP] Deleting project workspace... 00:00:07.119 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.125 [WS-CLEANUP] done 00:00:07.312 [Pipeline] setCustomBuildProperty 00:00:07.377 [Pipeline] httpRequest 00:00:07.744 [Pipeline] echo 00:00:07.746 Sorcerer 10.211.164.20 is alive 00:00:07.752 [Pipeline] retry 00:00:07.753 [Pipeline] { 00:00:07.761 [Pipeline] httpRequest 00:00:07.842 HttpMethod: GET 00:00:07.843 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.843 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.862 Response Code: HTTP/1.1 200 OK 00:00:07.862 Success: Status code 200 is in the accepted range: 200,404 00:00:07.863 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:26.244 [Pipeline] } 00:00:26.261 [Pipeline] // retry 00:00:26.268 [Pipeline] sh 00:00:26.549 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:26.565 [Pipeline] httpRequest 00:00:26.937 [Pipeline] echo 00:00:26.939 Sorcerer 10.211.164.20 is alive 00:00:26.949 [Pipeline] retry 00:00:26.951 [Pipeline] { 00:00:26.965 [Pipeline] httpRequest 00:00:26.969 HttpMethod: GET 00:00:26.970 URL: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:26.971 Sending request to url: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:26.985 Response Code: HTTP/1.1 200 OK 00:00:26.986 Success: Status code 200 is in the accepted range: 200,404 00:00:26.986 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:52.704 [Pipeline] } 00:00:52.722 [Pipeline] // retry 00:00:52.730 [Pipeline] sh 00:00:53.012 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:55.560 [Pipeline] sh 00:00:55.841 + git -C spdk log --oneline -n5 00:00:55.841 b18e1bd62 version: v24.09.1-pre 00:00:55.841 19524ad45 version: v24.09 00:00:55.841 9756b40a3 dpdk: update submodule to include alarm_cancel fix 00:00:55.841 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810 00:00:55.841 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys 00:00:55.861 [Pipeline] withCredentials 00:00:55.871 > git --version # timeout=10 00:00:55.885 > git --version # 'git version 2.39.2' 00:00:55.901 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:55.904 [Pipeline] { 00:00:55.913 [Pipeline] retry 00:00:55.915 [Pipeline] { 00:00:55.931 [Pipeline] sh 00:00:56.212 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:00:56.223 [Pipeline] } 00:00:56.241 [Pipeline] // retry 00:00:56.246 [Pipeline] } 00:00:56.262 [Pipeline] // withCredentials 00:00:56.272 [Pipeline] httpRequest 00:00:56.658 [Pipeline] echo 00:00:56.660 Sorcerer 10.211.164.20 is alive 00:00:56.670 [Pipeline] retry 00:00:56.672 [Pipeline] { 00:00:56.686 [Pipeline] httpRequest 00:00:56.691 HttpMethod: GET 00:00:56.691 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:56.692 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:56.700 Response Code: HTTP/1.1 200 OK 00:00:56.700 Success: Status code 200 is in the accepted range: 200,404 00:00:56.701 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:26.790 [Pipeline] } 00:01:26.807 [Pipeline] // retry 00:01:26.815 [Pipeline] sh 00:01:27.095 + tar 
--no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:28.492 [Pipeline] sh 00:01:28.772 + git -C dpdk log --oneline -n5 00:01:28.772 caf0f5d395 version: 22.11.4 00:01:28.772 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:28.772 dc9c799c7d vhost: fix missing spinlock unlock 00:01:28.772 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:28.772 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:28.790 [Pipeline] writeFile 00:01:28.804 [Pipeline] sh 00:01:29.086 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:29.099 [Pipeline] sh 00:01:29.381 + cat autorun-spdk.conf 00:01:29.381 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:29.381 SPDK_TEST_NVMF=1 00:01:29.381 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:29.381 SPDK_TEST_URING=1 00:01:29.381 SPDK_TEST_USDT=1 00:01:29.381 SPDK_RUN_UBSAN=1 00:01:29.381 NET_TYPE=virt 00:01:29.381 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:29.381 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:29.381 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:29.387 RUN_NIGHTLY=1 00:01:29.389 [Pipeline] } 00:01:29.400 [Pipeline] // stage 00:01:29.413 [Pipeline] stage 00:01:29.415 [Pipeline] { (Run VM) 00:01:29.427 [Pipeline] sh 00:01:29.705 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:29.705 + echo 'Start stage prepare_nvme.sh' 00:01:29.705 Start stage prepare_nvme.sh 00:01:29.705 + [[ -n 5 ]] 00:01:29.705 + disk_prefix=ex5 00:01:29.705 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:29.705 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:29.705 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:29.705 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:29.705 ++ SPDK_TEST_NVMF=1 00:01:29.705 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:29.705 ++ SPDK_TEST_URING=1 00:01:29.705 ++ SPDK_TEST_USDT=1 00:01:29.705 ++ SPDK_RUN_UBSAN=1 00:01:29.705 ++ NET_TYPE=virt 00:01:29.705 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:29.705 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:29.705 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:29.705 ++ RUN_NIGHTLY=1 00:01:29.705 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:29.705 + nvme_files=() 00:01:29.705 + declare -A nvme_files 00:01:29.705 + backend_dir=/var/lib/libvirt/images/backends 00:01:29.705 + nvme_files['nvme.img']=5G 00:01:29.705 + nvme_files['nvme-cmb.img']=5G 00:01:29.705 + nvme_files['nvme-multi0.img']=4G 00:01:29.705 + nvme_files['nvme-multi1.img']=4G 00:01:29.705 + nvme_files['nvme-multi2.img']=4G 00:01:29.705 + nvme_files['nvme-openstack.img']=8G 00:01:29.705 + nvme_files['nvme-zns.img']=5G 00:01:29.705 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:29.705 + (( SPDK_TEST_FTL == 1 )) 00:01:29.705 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:29.705 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:29.705 + for nvme in "${!nvme_files[@]}" 00:01:29.705 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:01:29.705 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:29.705 + for nvme in "${!nvme_files[@]}" 00:01:29.705 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:01:29.705 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:29.705 + for nvme in "${!nvme_files[@]}" 00:01:29.705 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:01:29.705 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:29.705 + for nvme in "${!nvme_files[@]}" 00:01:29.705 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:01:29.964 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:29.964 + for nvme in "${!nvme_files[@]}" 00:01:29.964 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:01:29.964 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:29.964 + for nvme in "${!nvme_files[@]}" 00:01:29.964 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:01:29.964 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:29.964 + for nvme in "${!nvme_files[@]}" 00:01:29.964 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:01:30.223 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:30.223 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:01:30.223 + echo 'End stage prepare_nvme.sh' 00:01:30.223 End stage prepare_nvme.sh 00:01:30.234 [Pipeline] sh 00:01:30.514 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:30.514 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39 00:01:30.514 00:01:30.514 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:30.514 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:30.514 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:30.514 HELP=0 00:01:30.514 DRY_RUN=0 00:01:30.514 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:01:30.514 NVME_DISKS_TYPE=nvme,nvme, 00:01:30.514 NVME_AUTO_CREATE=0 00:01:30.514 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:01:30.514 NVME_CMB=,, 00:01:30.514 NVME_PMR=,, 00:01:30.514 NVME_ZNS=,, 00:01:30.514 NVME_MS=,, 00:01:30.514 NVME_FDP=,, 
00:01:30.514 SPDK_VAGRANT_DISTRO=fedora39 00:01:30.514 SPDK_VAGRANT_VMCPU=10 00:01:30.514 SPDK_VAGRANT_VMRAM=12288 00:01:30.514 SPDK_VAGRANT_PROVIDER=libvirt 00:01:30.514 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:30.514 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:30.514 SPDK_OPENSTACK_NETWORK=0 00:01:30.514 VAGRANT_PACKAGE_BOX=0 00:01:30.514 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:30.514 FORCE_DISTRO=true 00:01:30.514 VAGRANT_BOX_VERSION= 00:01:30.514 EXTRA_VAGRANTFILES= 00:01:30.514 NIC_MODEL=e1000 00:01:30.514 00:01:30.514 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:01:30.514 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:33.055 Bringing machine 'default' up with 'libvirt' provider... 00:01:33.991 ==> default: Creating image (snapshot of base box volume). 00:01:33.991 ==> default: Creating domain with the following settings... 00:01:33.991 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1734394639_70a63be1a1f0c6467ef5 00:01:33.991 ==> default: -- Domain type: kvm 00:01:33.991 ==> default: -- Cpus: 10 00:01:33.991 ==> default: -- Feature: acpi 00:01:33.991 ==> default: -- Feature: apic 00:01:33.991 ==> default: -- Feature: pae 00:01:33.991 ==> default: -- Memory: 12288M 00:01:33.991 ==> default: -- Memory Backing: hugepages: 00:01:33.991 ==> default: -- Management MAC: 00:01:33.991 ==> default: -- Loader: 00:01:33.991 ==> default: -- Nvram: 00:01:33.991 ==> default: -- Base box: spdk/fedora39 00:01:33.991 ==> default: -- Storage pool: default 00:01:33.991 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734394639_70a63be1a1f0c6467ef5.img (20G) 00:01:33.991 ==> default: -- Volume Cache: default 00:01:33.991 ==> default: -- Kernel: 00:01:33.991 ==> default: -- Initrd: 00:01:33.991 ==> default: -- Graphics Type: vnc 00:01:33.991 ==> default: -- Graphics Port: -1 00:01:33.991 ==> default: -- Graphics IP: 127.0.0.1 00:01:33.991 ==> default: -- Graphics Password: Not defined 00:01:33.991 ==> default: -- Video Type: cirrus 00:01:33.991 ==> default: -- Video VRAM: 9216 00:01:33.991 ==> default: -- Sound Type: 00:01:33.991 ==> default: -- Keymap: en-us 00:01:33.991 ==> default: -- TPM Path: 00:01:33.991 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:33.991 ==> default: -- Command line args: 00:01:33.991 ==> default: -> value=-device, 00:01:33.991 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:33.991 ==> default: -> value=-drive, 00:01:33.991 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:01:33.991 ==> default: -> value=-device, 00:01:33.991 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:33.991 ==> default: -> value=-device, 00:01:33.991 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:33.991 ==> default: -> value=-drive, 00:01:33.991 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:33.991 ==> default: -> value=-device, 00:01:33.991 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:33.991 ==> default: -> value=-drive, 00:01:33.991 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:33.991 ==> default: -> value=-device, 00:01:33.991 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:33.991 ==> default: -> value=-drive, 00:01:33.991 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:33.991 ==> default: -> value=-device, 00:01:33.991 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:33.991 ==> default: Creating shared folders metadata... 00:01:33.991 ==> default: Starting domain. 00:01:35.371 ==> default: Waiting for domain to get an IP address... 00:01:53.460 ==> default: Waiting for SSH to become available... 00:01:53.460 ==> default: Configuring and enabling network interfaces... 00:01:55.995 default: SSH address: 192.168.121.145:22 00:01:55.995 default: SSH username: vagrant 00:01:55.995 default: SSH auth method: private key 00:01:57.935 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:06.052 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:11.369 ==> default: Mounting SSHFS shared folder... 00:02:12.748 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:12.748 ==> default: Checking Mount.. 00:02:14.125 ==> default: Folder Successfully Mounted! 00:02:14.125 ==> default: Running provisioner: file... 00:02:14.691 default: ~/.gitconfig => .gitconfig 00:02:15.258 00:02:15.259 SUCCESS! 00:02:15.259 00:02:15.259 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:15.259 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:15.259 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:15.259 00:02:15.267 [Pipeline] } 00:02:15.282 [Pipeline] // stage 00:02:15.290 [Pipeline] dir 00:02:15.291 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:02:15.293 [Pipeline] { 00:02:15.304 [Pipeline] catchError 00:02:15.306 [Pipeline] { 00:02:15.318 [Pipeline] sh 00:02:15.595 + vagrant ssh-config --host vagrant 00:02:15.595 + sed -ne /^Host/,$p 00:02:15.595 + tee ssh_conf 00:02:18.882 Host vagrant 00:02:18.882 HostName 192.168.121.145 00:02:18.882 User vagrant 00:02:18.882 Port 22 00:02:18.882 UserKnownHostsFile /dev/null 00:02:18.882 StrictHostKeyChecking no 00:02:18.882 PasswordAuthentication no 00:02:18.882 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:18.882 IdentitiesOnly yes 00:02:18.882 LogLevel FATAL 00:02:18.882 ForwardAgent yes 00:02:18.882 ForwardX11 yes 00:02:18.882 00:02:18.896 [Pipeline] withEnv 00:02:18.898 [Pipeline] { 00:02:18.912 [Pipeline] sh 00:02:19.223 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:19.224 source /etc/os-release 00:02:19.224 [[ -e /image.version ]] && img=$(< /image.version) 00:02:19.224 # Minimal, systemd-like check. 
00:02:19.224 if [[ -e /.dockerenv ]]; then 00:02:19.224 # Clear garbage from the node's name: 00:02:19.224 # agt-er_autotest_547-896 -> autotest_547-896 00:02:19.224 # $HOSTNAME is the actual container id 00:02:19.224 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:19.224 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:19.224 # We can assume this is a mount from a host where container is running, 00:02:19.224 # so fetch its hostname to easily identify the target swarm worker. 00:02:19.224 container="$(< /etc/hostname) ($agent)" 00:02:19.224 else 00:02:19.224 # Fallback 00:02:19.224 container=$agent 00:02:19.224 fi 00:02:19.224 fi 00:02:19.224 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:19.224 00:02:19.235 [Pipeline] } 00:02:19.255 [Pipeline] // withEnv 00:02:19.274 [Pipeline] setCustomBuildProperty 00:02:19.299 [Pipeline] stage 00:02:19.302 [Pipeline] { (Tests) 00:02:19.333 [Pipeline] sh 00:02:19.606 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:19.619 [Pipeline] sh 00:02:19.900 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:19.914 [Pipeline] timeout 00:02:19.914 Timeout set to expire in 1 hr 0 min 00:02:19.916 [Pipeline] { 00:02:19.930 [Pipeline] sh 00:02:20.211 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:20.779 HEAD is now at b18e1bd62 version: v24.09.1-pre 00:02:20.791 [Pipeline] sh 00:02:21.072 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:21.345 [Pipeline] sh 00:02:21.625 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:21.900 [Pipeline] sh 00:02:22.180 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:22.439 ++ readlink -f spdk_repo 00:02:22.439 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:22.439 + [[ -n /home/vagrant/spdk_repo ]] 00:02:22.439 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:22.439 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:22.439 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:22.439 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:22.439 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:22.439 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:22.439 + cd /home/vagrant/spdk_repo 00:02:22.439 + source /etc/os-release 00:02:22.439 ++ NAME='Fedora Linux' 00:02:22.439 ++ VERSION='39 (Cloud Edition)' 00:02:22.439 ++ ID=fedora 00:02:22.439 ++ VERSION_ID=39 00:02:22.439 ++ VERSION_CODENAME= 00:02:22.439 ++ PLATFORM_ID=platform:f39 00:02:22.439 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:22.439 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:22.439 ++ LOGO=fedora-logo-icon 00:02:22.439 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:22.439 ++ HOME_URL=https://fedoraproject.org/ 00:02:22.439 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:22.439 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:22.439 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:22.439 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:22.439 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:22.439 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:22.439 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:22.439 ++ SUPPORT_END=2024-11-12 00:02:22.439 ++ VARIANT='Cloud Edition' 00:02:22.439 ++ VARIANT_ID=cloud 00:02:22.439 + uname -a 00:02:22.439 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:22.439 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:22.698 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:22.698 Hugepages 00:02:22.698 node hugesize free / total 00:02:22.698 node0 1048576kB 0 / 0 00:02:22.698 node0 2048kB 0 / 0 00:02:22.698 00:02:22.698 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:22.960 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:22.960 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:22.960 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:22.960 + rm -f /tmp/spdk-ld-path 00:02:22.960 + source autorun-spdk.conf 00:02:22.960 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:22.960 ++ SPDK_TEST_NVMF=1 00:02:22.960 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:22.960 ++ SPDK_TEST_URING=1 00:02:22.960 ++ SPDK_TEST_USDT=1 00:02:22.960 ++ SPDK_RUN_UBSAN=1 00:02:22.960 ++ NET_TYPE=virt 00:02:22.960 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:22.960 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:22.960 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:22.960 ++ RUN_NIGHTLY=1 00:02:22.960 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:22.960 + [[ -n '' ]] 00:02:22.960 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:22.960 + for M in /var/spdk/build-*-manifest.txt 00:02:22.960 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:22.960 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:22.960 + for M in /var/spdk/build-*-manifest.txt 00:02:22.960 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:22.960 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:22.960 + for M in /var/spdk/build-*-manifest.txt 00:02:22.960 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:22.960 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:22.960 ++ uname 00:02:22.960 + [[ Linux == \L\i\n\u\x ]] 00:02:22.960 + sudo dmesg -T 00:02:22.960 + sudo dmesg --clear 00:02:22.960 + dmesg_pid=5991 00:02:22.960 + [[ Fedora Linux == FreeBSD ]] 
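The trace at 00:02:22 above shows the runner sourcing /etc/os-release and autorun-spdk.conf and then branching on the exported SPDK_* flags. A minimal sketch of that flag-file pattern, assuming a conf file of plain shell assignments like the one printed at 00:01:29 (the variable names come from the log; the script itself is illustrative, not the actual SPDK autorun code):

#!/usr/bin/env bash
# Sketch only: consume a flag file such as autorun-spdk.conf (plain shell assignments).
set -euo pipefail

conf=${1:-$HOME/spdk_repo/autorun-spdk.conf}   # hypothetical default path
if [[ -e $conf ]]; then
    source "$conf"
fi

# Flags default to 0/empty when the conf does not set them.
if (( ${SPDK_TEST_NVMF:-0} == 1 )); then
    echo "NVMe-oF tests enabled, transport=${SPDK_TEST_NVMF_TRANSPORT:-tcp}"
fi
if [[ -n ${SPDK_RUN_EXTERNAL_DPDK:-} ]]; then
    echo "using external DPDK build at ${SPDK_RUN_EXTERNAL_DPDK}"
fi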
00:02:22.960 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:22.960 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:22.960 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:22.960 + [[ -x /usr/src/fio-static/fio ]] 00:02:22.960 + sudo dmesg -Tw 00:02:22.960 + export FIO_BIN=/usr/src/fio-static/fio 00:02:22.960 + FIO_BIN=/usr/src/fio-static/fio 00:02:22.960 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:22.960 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:22.960 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:22.960 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:22.960 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:22.960 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:22.960 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:22.960 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:22.960 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:22.960 Test configuration: 00:02:22.960 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:22.960 SPDK_TEST_NVMF=1 00:02:22.960 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:22.960 SPDK_TEST_URING=1 00:02:22.960 SPDK_TEST_USDT=1 00:02:22.960 SPDK_RUN_UBSAN=1 00:02:22.960 NET_TYPE=virt 00:02:22.960 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:22.960 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:22.960 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:22.960 RUN_NIGHTLY=1 00:18:08 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:02:22.960 00:18:08 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:22.960 00:18:08 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:23.220 00:18:08 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:23.220 00:18:08 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:23.220 00:18:08 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:23.220 00:18:08 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.220 00:18:08 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.220 00:18:08 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.220 00:18:08 -- paths/export.sh@5 -- $ export PATH 00:02:23.220 00:18:08 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.220 00:18:08 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:23.220 00:18:08 -- common/autobuild_common.sh@479 -- $ date +%s 00:02:23.220 00:18:08 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1734394688.XXXXXX 00:02:23.220 00:18:08 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1734394688.uIb87Z 00:02:23.220 00:18:08 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:02:23.220 00:18:08 -- common/autobuild_common.sh@485 -- $ '[' -n v22.11.4 ']' 00:02:23.220 00:18:08 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:23.220 00:18:08 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:23.220 00:18:08 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:23.220 00:18:08 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:23.220 00:18:08 -- common/autobuild_common.sh@495 -- $ get_config_params 00:02:23.220 00:18:08 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:23.220 00:18:08 -- common/autotest_common.sh@10 -- $ set +x 00:02:23.220 00:18:08 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:23.220 00:18:08 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:02:23.220 00:18:08 -- pm/common@17 -- $ local monitor 00:02:23.220 00:18:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.220 00:18:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.220 00:18:08 -- pm/common@25 -- $ sleep 1 00:02:23.220 00:18:08 -- pm/common@21 -- $ date +%s 00:02:23.220 00:18:09 -- pm/common@21 -- $ date +%s 00:02:23.220 00:18:09 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734394689 00:02:23.220 00:18:09 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734394689 00:02:23.220 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734394689_collect-cpu-load.pm.log 00:02:23.220 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734394689_collect-vmstat.pm.log 00:02:24.158 00:18:10 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:02:24.158 00:18:10 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:24.158 00:18:10 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:24.158 00:18:10 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:24.158 00:18:10 -- spdk/autobuild.sh@16 -- $ date -u 00:02:24.158 Tue 
Dec 17 12:18:10 AM UTC 2024 00:02:24.158 00:18:10 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:24.158 v24.09-1-gb18e1bd62 00:02:24.158 00:18:10 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:24.158 00:18:10 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:24.158 00:18:10 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:24.158 00:18:10 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:24.158 00:18:10 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:24.158 00:18:10 -- common/autotest_common.sh@10 -- $ set +x 00:02:24.158 ************************************ 00:02:24.158 START TEST ubsan 00:02:24.158 ************************************ 00:02:24.158 using ubsan 00:02:24.158 00:18:10 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:24.158 00:02:24.158 real 0m0.000s 00:02:24.158 user 0m0.000s 00:02:24.158 sys 0m0.000s 00:02:24.158 00:18:10 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:24.158 00:18:10 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:24.158 ************************************ 00:02:24.158 END TEST ubsan 00:02:24.158 ************************************ 00:02:24.158 00:18:10 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:24.158 00:18:10 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:24.158 00:18:10 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:24.158 00:18:10 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:02:24.158 00:18:10 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:24.158 00:18:10 -- common/autotest_common.sh@10 -- $ set +x 00:02:24.158 ************************************ 00:02:24.158 START TEST build_native_dpdk 00:02:24.158 ************************************ 00:02:24.158 00:18:10 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:02:24.158 00:18:10 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:24.158 00:18:10 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:24.158 00:18:10 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:24.158 00:18:10 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:24.158 00:18:10 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:24.158 00:18:10 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:24.158 00:18:10 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:24.158 00:18:10 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:24.158 00:18:10 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:24.158 00:18:10 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:24.158 00:18:10 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:24.158 00:18:10 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:24.158 00:18:10 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:24.158 00:18:10 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:24.158 00:18:10 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:24.158 00:18:10 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:24.158 00:18:10 build_native_dpdk -- 
common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:24.158 00:18:10 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:24.158 00:18:10 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:24.158 00:18:10 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:24.158 caf0f5d395 version: 22.11.4 00:02:24.158 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:24.158 dc9c799c7d vhost: fix missing spinlock unlock 00:02:24.158 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:24.158 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:24.158 00:18:10 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:24.158 00:18:10 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:24.158 00:18:10 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:24.158 00:18:10 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:24.158 00:18:10 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:24.158 00:18:10 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:24.158 00:18:10 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:24.158 00:18:10 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:24.159 00:18:10 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:24.159 00:18:10 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:24.159 00:18:10 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:24.159 00:18:10 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:24.159 00:18:10 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:24.159 00:18:10 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:24.159 00:18:10 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:24.159 00:18:10 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:24.159 00:18:10 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:24.159 00:18:10 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:24.159 
00:18:10 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:24.159 00:18:10 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:24.159 patching file config/rte_config.h 00:02:24.159 Hunk #1 succeeded at 60 (offset 1 line). 00:02:24.159 00:18:10 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:24.159 00:18:10 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:24.159 patching file lib/pcapng/rte_pcapng.c 00:02:24.159 Hunk #1 succeeded at 110 (offset -18 lines). 00:02:24.159 00:18:10 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 22.11.4 24.07.0 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:24.159 00:18:10 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:24.424 00:18:10 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:24.424 00:18:10 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:24.424 00:18:10 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:24.424 00:18:10 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:24.424 00:18:10 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:24.424 00:18:10 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:24.424 00:18:10 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:24.424 00:18:10 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:24.424 00:18:10 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:24.424 00:18:10 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:24.424 00:18:10 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:24.424 00:18:10 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:02:24.424 00:18:10 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:24.424 00:18:10 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:29.696 The Meson build system 00:02:29.696 Version: 1.5.0 00:02:29.696 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:29.696 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:29.696 Build type: native build 00:02:29.696 Program cat found: YES (/usr/bin/cat) 00:02:29.696 Project name: DPDK 00:02:29.696 Project version: 22.11.4 00:02:29.696 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:29.697 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:29.697 Host machine cpu family: x86_64 00:02:29.697 Host machine cpu: x86_64 00:02:29.697 Message: ## Building in Developer Mode ## 00:02:29.697 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:29.697 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:29.697 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:29.697 Program objdump found: YES (/usr/bin/objdump) 00:02:29.697 Program python3 found: YES (/usr/bin/python3) 00:02:29.697 Program cat found: YES (/usr/bin/cat) 00:02:29.697 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:29.697 Checking for size of "void *" : 8 00:02:29.697 Checking for size of "void *" : 8 (cached) 00:02:29.697 Library m found: YES 00:02:29.697 Library numa found: YES 00:02:29.697 Has header "numaif.h" : YES 00:02:29.697 Library fdt found: NO 00:02:29.697 Library execinfo found: NO 00:02:29.697 Has header "execinfo.h" : YES 00:02:29.697 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:29.697 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:29.697 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:29.697 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:29.697 Run-time dependency openssl found: YES 3.1.1 00:02:29.697 Run-time dependency libpcap found: YES 1.10.4 00:02:29.697 Has header "pcap.h" with dependency libpcap: YES 00:02:29.697 Compiler for C supports arguments -Wcast-qual: YES 00:02:29.697 Compiler for C supports arguments -Wdeprecated: YES 00:02:29.697 Compiler for C supports arguments -Wformat: YES 00:02:29.697 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:29.697 Compiler for C supports arguments -Wformat-security: NO 00:02:29.697 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:29.697 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:29.697 Compiler for C supports arguments -Wnested-externs: YES 00:02:29.697 Compiler for C supports arguments -Wold-style-definition: YES 00:02:29.697 Compiler for C supports arguments -Wpointer-arith: YES 00:02:29.697 Compiler for C supports arguments -Wsign-compare: YES 00:02:29.697 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:29.697 Compiler for C supports arguments -Wundef: YES 00:02:29.697 Compiler for C supports arguments -Wwrite-strings: YES 00:02:29.697 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:29.697 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:29.697 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:29.697 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:29.697 Compiler for C supports arguments -mavx512f: YES 00:02:29.697 Checking if "AVX512 checking" compiles: YES 00:02:29.697 Fetching value of define "__SSE4_2__" : 1 00:02:29.697 Fetching value of define "__AES__" : 1 00:02:29.697 Fetching value of define "__AVX__" : 1 00:02:29.697 Fetching value of define "__AVX2__" : 1 00:02:29.697 Fetching value of define "__AVX512BW__" : (undefined) 00:02:29.697 Fetching value of define "__AVX512CD__" : (undefined) 00:02:29.697 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:29.697 Fetching value of define "__AVX512F__" : (undefined) 00:02:29.697 Fetching value of define "__AVX512VL__" : (undefined) 00:02:29.697 Fetching value of define "__PCLMUL__" : 1 00:02:29.697 Fetching value of define "__RDRND__" : 1 00:02:29.697 Fetching value of define "__RDSEED__" : 1 00:02:29.697 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:29.697 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:29.697 Message: lib/kvargs: Defining dependency "kvargs" 00:02:29.697 Message: lib/telemetry: Defining dependency "telemetry" 00:02:29.697 Checking for function "getentropy" : YES 00:02:29.697 Message: lib/eal: Defining dependency "eal" 00:02:29.697 Message: lib/ring: Defining dependency "ring" 00:02:29.697 Message: lib/rcu: Defining dependency "rcu" 00:02:29.697 Message: lib/mempool: Defining dependency "mempool" 00:02:29.697 Message: lib/mbuf: Defining dependency "mbuf" 00:02:29.697 Fetching value of define 
"__PCLMUL__" : 1 (cached) 00:02:29.697 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:29.697 Compiler for C supports arguments -mpclmul: YES 00:02:29.697 Compiler for C supports arguments -maes: YES 00:02:29.697 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:29.697 Compiler for C supports arguments -mavx512bw: YES 00:02:29.697 Compiler for C supports arguments -mavx512dq: YES 00:02:29.697 Compiler for C supports arguments -mavx512vl: YES 00:02:29.697 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:29.697 Compiler for C supports arguments -mavx2: YES 00:02:29.697 Compiler for C supports arguments -mavx: YES 00:02:29.697 Message: lib/net: Defining dependency "net" 00:02:29.697 Message: lib/meter: Defining dependency "meter" 00:02:29.697 Message: lib/ethdev: Defining dependency "ethdev" 00:02:29.697 Message: lib/pci: Defining dependency "pci" 00:02:29.697 Message: lib/cmdline: Defining dependency "cmdline" 00:02:29.697 Message: lib/metrics: Defining dependency "metrics" 00:02:29.697 Message: lib/hash: Defining dependency "hash" 00:02:29.697 Message: lib/timer: Defining dependency "timer" 00:02:29.697 Fetching value of define "__AVX2__" : 1 (cached) 00:02:29.697 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:29.697 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:29.697 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:29.697 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:29.697 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:29.697 Message: lib/acl: Defining dependency "acl" 00:02:29.697 Message: lib/bbdev: Defining dependency "bbdev" 00:02:29.697 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:29.697 Run-time dependency libelf found: YES 0.191 00:02:29.697 Message: lib/bpf: Defining dependency "bpf" 00:02:29.697 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:29.697 Message: lib/compressdev: Defining dependency "compressdev" 00:02:29.697 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:29.697 Message: lib/distributor: Defining dependency "distributor" 00:02:29.697 Message: lib/efd: Defining dependency "efd" 00:02:29.697 Message: lib/eventdev: Defining dependency "eventdev" 00:02:29.697 Message: lib/gpudev: Defining dependency "gpudev" 00:02:29.697 Message: lib/gro: Defining dependency "gro" 00:02:29.697 Message: lib/gso: Defining dependency "gso" 00:02:29.697 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:29.697 Message: lib/jobstats: Defining dependency "jobstats" 00:02:29.697 Message: lib/latencystats: Defining dependency "latencystats" 00:02:29.697 Message: lib/lpm: Defining dependency "lpm" 00:02:29.697 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:29.697 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:29.697 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:29.697 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:29.697 Message: lib/member: Defining dependency "member" 00:02:29.697 Message: lib/pcapng: Defining dependency "pcapng" 00:02:29.697 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:29.697 Message: lib/power: Defining dependency "power" 00:02:29.697 Message: lib/rawdev: Defining dependency "rawdev" 00:02:29.697 Message: lib/regexdev: Defining dependency "regexdev" 00:02:29.697 Message: lib/dmadev: Defining dependency "dmadev" 00:02:29.697 Message: lib/rib: Defining 
dependency "rib" 00:02:29.697 Message: lib/reorder: Defining dependency "reorder" 00:02:29.697 Message: lib/sched: Defining dependency "sched" 00:02:29.697 Message: lib/security: Defining dependency "security" 00:02:29.697 Message: lib/stack: Defining dependency "stack" 00:02:29.697 Has header "linux/userfaultfd.h" : YES 00:02:29.697 Message: lib/vhost: Defining dependency "vhost" 00:02:29.697 Message: lib/ipsec: Defining dependency "ipsec" 00:02:29.697 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:29.697 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:29.697 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:29.697 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:29.697 Message: lib/fib: Defining dependency "fib" 00:02:29.697 Message: lib/port: Defining dependency "port" 00:02:29.697 Message: lib/pdump: Defining dependency "pdump" 00:02:29.697 Message: lib/table: Defining dependency "table" 00:02:29.697 Message: lib/pipeline: Defining dependency "pipeline" 00:02:29.697 Message: lib/graph: Defining dependency "graph" 00:02:29.697 Message: lib/node: Defining dependency "node" 00:02:29.697 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:29.697 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:29.697 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:29.697 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:29.697 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:29.697 Compiler for C supports arguments -Wno-unused-value: YES 00:02:29.697 Compiler for C supports arguments -Wno-format: YES 00:02:29.697 Compiler for C supports arguments -Wno-format-security: YES 00:02:29.697 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:31.075 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:31.075 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:31.075 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:31.075 Fetching value of define "__AVX2__" : 1 (cached) 00:02:31.075 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:31.075 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:31.075 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:31.075 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:31.075 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:31.075 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:31.075 Configuring doxy-api.conf using configuration 00:02:31.075 Program sphinx-build found: NO 00:02:31.075 Configuring rte_build_config.h using configuration 00:02:31.075 Message: 00:02:31.075 ================= 00:02:31.075 Applications Enabled 00:02:31.075 ================= 00:02:31.076 00:02:31.076 apps: 00:02:31.076 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:31.076 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:31.076 test-security-perf, 00:02:31.076 00:02:31.076 Message: 00:02:31.076 ================= 00:02:31.076 Libraries Enabled 00:02:31.076 ================= 00:02:31.076 00:02:31.076 libs: 00:02:31.076 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:31.076 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:31.076 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:31.076 eventdev, gpudev, gro, gso, ip_frag, 
jobstats, latencystats, lpm, 00:02:31.076 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:31.076 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:31.076 table, pipeline, graph, node, 00:02:31.076 00:02:31.076 Message: 00:02:31.076 =============== 00:02:31.076 Drivers Enabled 00:02:31.076 =============== 00:02:31.076 00:02:31.076 common: 00:02:31.076 00:02:31.076 bus: 00:02:31.076 pci, vdev, 00:02:31.076 mempool: 00:02:31.076 ring, 00:02:31.076 dma: 00:02:31.076 00:02:31.076 net: 00:02:31.076 i40e, 00:02:31.076 raw: 00:02:31.076 00:02:31.076 crypto: 00:02:31.076 00:02:31.076 compress: 00:02:31.076 00:02:31.076 regex: 00:02:31.076 00:02:31.076 vdpa: 00:02:31.076 00:02:31.076 event: 00:02:31.076 00:02:31.076 baseband: 00:02:31.076 00:02:31.076 gpu: 00:02:31.076 00:02:31.076 00:02:31.076 Message: 00:02:31.076 ================= 00:02:31.076 Content Skipped 00:02:31.076 ================= 00:02:31.076 00:02:31.076 apps: 00:02:31.076 00:02:31.076 libs: 00:02:31.076 kni: explicitly disabled via build config (deprecated lib) 00:02:31.076 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:31.076 00:02:31.076 drivers: 00:02:31.076 common/cpt: not in enabled drivers build config 00:02:31.076 common/dpaax: not in enabled drivers build config 00:02:31.076 common/iavf: not in enabled drivers build config 00:02:31.076 common/idpf: not in enabled drivers build config 00:02:31.076 common/mvep: not in enabled drivers build config 00:02:31.076 common/octeontx: not in enabled drivers build config 00:02:31.076 bus/auxiliary: not in enabled drivers build config 00:02:31.076 bus/dpaa: not in enabled drivers build config 00:02:31.076 bus/fslmc: not in enabled drivers build config 00:02:31.076 bus/ifpga: not in enabled drivers build config 00:02:31.076 bus/vmbus: not in enabled drivers build config 00:02:31.076 common/cnxk: not in enabled drivers build config 00:02:31.076 common/mlx5: not in enabled drivers build config 00:02:31.076 common/qat: not in enabled drivers build config 00:02:31.076 common/sfc_efx: not in enabled drivers build config 00:02:31.076 mempool/bucket: not in enabled drivers build config 00:02:31.076 mempool/cnxk: not in enabled drivers build config 00:02:31.076 mempool/dpaa: not in enabled drivers build config 00:02:31.076 mempool/dpaa2: not in enabled drivers build config 00:02:31.076 mempool/octeontx: not in enabled drivers build config 00:02:31.076 mempool/stack: not in enabled drivers build config 00:02:31.076 dma/cnxk: not in enabled drivers build config 00:02:31.076 dma/dpaa: not in enabled drivers build config 00:02:31.076 dma/dpaa2: not in enabled drivers build config 00:02:31.076 dma/hisilicon: not in enabled drivers build config 00:02:31.076 dma/idxd: not in enabled drivers build config 00:02:31.076 dma/ioat: not in enabled drivers build config 00:02:31.076 dma/skeleton: not in enabled drivers build config 00:02:31.076 net/af_packet: not in enabled drivers build config 00:02:31.076 net/af_xdp: not in enabled drivers build config 00:02:31.076 net/ark: not in enabled drivers build config 00:02:31.076 net/atlantic: not in enabled drivers build config 00:02:31.076 net/avp: not in enabled drivers build config 00:02:31.076 net/axgbe: not in enabled drivers build config 00:02:31.076 net/bnx2x: not in enabled drivers build config 00:02:31.076 net/bnxt: not in enabled drivers build config 00:02:31.076 net/bonding: not in enabled drivers build config 00:02:31.076 net/cnxk: not in enabled drivers build config 00:02:31.076 net/cxgbe: not in 
enabled drivers build config 00:02:31.076 net/dpaa: not in enabled drivers build config 00:02:31.076 net/dpaa2: not in enabled drivers build config 00:02:31.076 net/e1000: not in enabled drivers build config 00:02:31.076 net/ena: not in enabled drivers build config 00:02:31.076 net/enetc: not in enabled drivers build config 00:02:31.076 net/enetfec: not in enabled drivers build config 00:02:31.076 net/enic: not in enabled drivers build config 00:02:31.076 net/failsafe: not in enabled drivers build config 00:02:31.076 net/fm10k: not in enabled drivers build config 00:02:31.076 net/gve: not in enabled drivers build config 00:02:31.076 net/hinic: not in enabled drivers build config 00:02:31.076 net/hns3: not in enabled drivers build config 00:02:31.076 net/iavf: not in enabled drivers build config 00:02:31.076 net/ice: not in enabled drivers build config 00:02:31.076 net/idpf: not in enabled drivers build config 00:02:31.076 net/igc: not in enabled drivers build config 00:02:31.076 net/ionic: not in enabled drivers build config 00:02:31.076 net/ipn3ke: not in enabled drivers build config 00:02:31.076 net/ixgbe: not in enabled drivers build config 00:02:31.076 net/kni: not in enabled drivers build config 00:02:31.076 net/liquidio: not in enabled drivers build config 00:02:31.076 net/mana: not in enabled drivers build config 00:02:31.076 net/memif: not in enabled drivers build config 00:02:31.076 net/mlx4: not in enabled drivers build config 00:02:31.076 net/mlx5: not in enabled drivers build config 00:02:31.076 net/mvneta: not in enabled drivers build config 00:02:31.076 net/mvpp2: not in enabled drivers build config 00:02:31.076 net/netvsc: not in enabled drivers build config 00:02:31.076 net/nfb: not in enabled drivers build config 00:02:31.076 net/nfp: not in enabled drivers build config 00:02:31.076 net/ngbe: not in enabled drivers build config 00:02:31.076 net/null: not in enabled drivers build config 00:02:31.076 net/octeontx: not in enabled drivers build config 00:02:31.076 net/octeon_ep: not in enabled drivers build config 00:02:31.076 net/pcap: not in enabled drivers build config 00:02:31.076 net/pfe: not in enabled drivers build config 00:02:31.076 net/qede: not in enabled drivers build config 00:02:31.076 net/ring: not in enabled drivers build config 00:02:31.076 net/sfc: not in enabled drivers build config 00:02:31.076 net/softnic: not in enabled drivers build config 00:02:31.076 net/tap: not in enabled drivers build config 00:02:31.076 net/thunderx: not in enabled drivers build config 00:02:31.076 net/txgbe: not in enabled drivers build config 00:02:31.076 net/vdev_netvsc: not in enabled drivers build config 00:02:31.076 net/vhost: not in enabled drivers build config 00:02:31.076 net/virtio: not in enabled drivers build config 00:02:31.076 net/vmxnet3: not in enabled drivers build config 00:02:31.076 raw/cnxk_bphy: not in enabled drivers build config 00:02:31.076 raw/cnxk_gpio: not in enabled drivers build config 00:02:31.076 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:31.076 raw/ifpga: not in enabled drivers build config 00:02:31.076 raw/ntb: not in enabled drivers build config 00:02:31.076 raw/skeleton: not in enabled drivers build config 00:02:31.076 crypto/armv8: not in enabled drivers build config 00:02:31.076 crypto/bcmfs: not in enabled drivers build config 00:02:31.076 crypto/caam_jr: not in enabled drivers build config 00:02:31.076 crypto/ccp: not in enabled drivers build config 00:02:31.076 crypto/cnxk: not in enabled drivers build config 00:02:31.076 
crypto/dpaa_sec: not in enabled drivers build config 00:02:31.076 crypto/dpaa2_sec: not in enabled drivers build config 00:02:31.076 crypto/ipsec_mb: not in enabled drivers build config 00:02:31.076 crypto/mlx5: not in enabled drivers build config 00:02:31.076 crypto/mvsam: not in enabled drivers build config 00:02:31.076 crypto/nitrox: not in enabled drivers build config 00:02:31.076 crypto/null: not in enabled drivers build config 00:02:31.076 crypto/octeontx: not in enabled drivers build config 00:02:31.076 crypto/openssl: not in enabled drivers build config 00:02:31.076 crypto/scheduler: not in enabled drivers build config 00:02:31.076 crypto/uadk: not in enabled drivers build config 00:02:31.076 crypto/virtio: not in enabled drivers build config 00:02:31.076 compress/isal: not in enabled drivers build config 00:02:31.076 compress/mlx5: not in enabled drivers build config 00:02:31.076 compress/octeontx: not in enabled drivers build config 00:02:31.076 compress/zlib: not in enabled drivers build config 00:02:31.076 regex/mlx5: not in enabled drivers build config 00:02:31.076 regex/cn9k: not in enabled drivers build config 00:02:31.076 vdpa/ifc: not in enabled drivers build config 00:02:31.076 vdpa/mlx5: not in enabled drivers build config 00:02:31.076 vdpa/sfc: not in enabled drivers build config 00:02:31.076 event/cnxk: not in enabled drivers build config 00:02:31.076 event/dlb2: not in enabled drivers build config 00:02:31.076 event/dpaa: not in enabled drivers build config 00:02:31.076 event/dpaa2: not in enabled drivers build config 00:02:31.076 event/dsw: not in enabled drivers build config 00:02:31.076 event/opdl: not in enabled drivers build config 00:02:31.076 event/skeleton: not in enabled drivers build config 00:02:31.076 event/sw: not in enabled drivers build config 00:02:31.076 event/octeontx: not in enabled drivers build config 00:02:31.076 baseband/acc: not in enabled drivers build config 00:02:31.076 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:31.076 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:31.076 baseband/la12xx: not in enabled drivers build config 00:02:31.076 baseband/null: not in enabled drivers build config 00:02:31.076 baseband/turbo_sw: not in enabled drivers build config 00:02:31.076 gpu/cuda: not in enabled drivers build config 00:02:31.076 00:02:31.076 00:02:31.076 Build targets in project: 314 00:02:31.076 00:02:31.076 DPDK 22.11.4 00:02:31.076 00:02:31.076 User defined options 00:02:31.076 libdir : lib 00:02:31.076 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:31.076 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:31.077 c_link_args : 00:02:31.077 enable_docs : false 00:02:31.077 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:31.077 enable_kmods : false 00:02:31.077 machine : native 00:02:31.077 tests : false 00:02:31.077 00:02:31.077 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:31.077 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
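The configuration summary above records the full set of user-defined options passed to meson, and the warning notes that the configure step was run in the deprecated `meson [options]` form rather than `meson setup [options]`. As a rough, hand-written sketch only (the real command is issued by the SPDK autobuild wrapper in common/autobuild_common.sh and may differ in detail), the recorded options correspond to an invocation along these lines:

# Sketch of an equivalent configure + build step, reconstructed from the
# "User defined options" block above. The prefix path, the build directory
# name (build-tmp) and the -j10 job count are taken from the log; the exact
# option spelling used by the wrapper script is an assumption.
meson setup build-tmp \
  --prefix=/home/vagrant/spdk_repo/dpdk/build \
  --libdir=lib \
  -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
  -Denable_docs=false \
  -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
  -Denable_kmods=false \
  -Dmachine=native \
  -Dtests=false
ninja -C build-tmp -j10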
00:02:31.336 00:18:17 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:31.336 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:31.336 [1/743] Generating lib/rte_telemetry_def with a custom command 00:02:31.336 [2/743] Generating lib/rte_kvargs_mingw with a custom command 00:02:31.336 [3/743] Generating lib/rte_telemetry_mingw with a custom command 00:02:31.336 [4/743] Generating lib/rte_kvargs_def with a custom command 00:02:31.336 [5/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:31.594 [6/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:31.594 [7/743] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:31.594 [8/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:31.594 [9/743] Linking static target lib/librte_kvargs.a 00:02:31.594 [10/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:31.594 [11/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:31.594 [12/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:31.594 [13/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:31.594 [14/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:31.594 [15/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:31.594 [16/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:31.594 [17/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:31.594 [18/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:31.852 [19/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:31.852 [20/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:31.852 [21/743] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.852 [22/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:31.852 [23/743] Linking target lib/librte_kvargs.so.23.0 00:02:31.852 [24/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:31.852 [25/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:31.852 [26/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:31.852 [27/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:31.852 [28/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:32.111 [29/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:32.111 [30/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:32.111 [31/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:32.111 [32/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:32.111 [33/743] Linking static target lib/librte_telemetry.a 00:02:32.111 [34/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:32.111 [35/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:32.111 [36/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:32.111 [37/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:32.111 [38/743] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:32.111 [39/743] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:32.111 [40/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:32.370 [41/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:32.370 [42/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:32.370 [43/743] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.370 [44/743] Linking target lib/librte_telemetry.so.23.0 00:02:32.370 [45/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:32.370 [46/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:32.370 [47/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:32.370 [48/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:32.628 [49/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:32.628 [50/743] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:32.628 [51/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:32.628 [52/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:32.628 [53/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:32.628 [54/743] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:32.628 [55/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:32.628 [56/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:32.628 [57/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:32.628 [58/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:32.628 [59/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:32.628 [60/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:32.628 [61/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:32.628 [62/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:32.628 [63/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:32.886 [64/743] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:32.886 [65/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:32.886 [66/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:32.886 [67/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:32.886 [68/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:32.886 [69/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:32.886 [70/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:32.886 [71/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:32.886 [72/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:32.886 [73/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:32.886 [74/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:33.144 [75/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:33.144 [76/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:33.144 [77/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:33.144 [78/743] Generating 
lib/rte_eal_def with a custom command 00:02:33.144 [79/743] Generating lib/rte_eal_mingw with a custom command 00:02:33.144 [80/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:33.144 [81/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:33.144 [82/743] Generating lib/rte_ring_def with a custom command 00:02:33.144 [83/743] Generating lib/rte_ring_mingw with a custom command 00:02:33.144 [84/743] Generating lib/rte_rcu_def with a custom command 00:02:33.144 [85/743] Generating lib/rte_rcu_mingw with a custom command 00:02:33.145 [86/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:33.145 [87/743] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:33.145 [88/743] Linking static target lib/librte_ring.a 00:02:33.145 [89/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:33.145 [90/743] Generating lib/rte_mempool_def with a custom command 00:02:33.402 [91/743] Generating lib/rte_mempool_mingw with a custom command 00:02:33.402 [92/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:33.402 [93/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:33.402 [94/743] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.661 [95/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:33.661 [96/743] Linking static target lib/librte_eal.a 00:02:33.661 [97/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:33.661 [98/743] Generating lib/rte_mbuf_def with a custom command 00:02:33.919 [99/743] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:33.919 [100/743] Generating lib/rte_mbuf_mingw with a custom command 00:02:33.919 [101/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:33.919 [102/743] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:33.919 [103/743] Linking static target lib/librte_rcu.a 00:02:33.919 [104/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:33.919 [105/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:34.177 [106/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:34.177 [107/743] Linking static target lib/librte_mempool.a 00:02:34.177 [108/743] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.177 [109/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:34.434 [110/743] Generating lib/rte_net_def with a custom command 00:02:34.434 [111/743] Generating lib/rte_net_mingw with a custom command 00:02:34.434 [112/743] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:34.434 [113/743] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:34.434 [114/743] Generating lib/rte_meter_def with a custom command 00:02:34.434 [115/743] Generating lib/rte_meter_mingw with a custom command 00:02:34.434 [116/743] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:34.434 [117/743] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:34.434 [118/743] Linking static target lib/librte_meter.a 00:02:34.692 [119/743] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:34.692 [120/743] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:34.692 [121/743] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:34.692 [122/743] Generating lib/meter.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:34.950 [123/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:34.950 [124/743] Linking static target lib/librte_mbuf.a 00:02:34.950 [125/743] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:34.950 [126/743] Linking static target lib/librte_net.a 00:02:34.950 [127/743] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.208 [128/743] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.208 [129/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:35.208 [130/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:35.208 [131/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:35.208 [132/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:35.467 [133/743] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.467 [134/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:35.725 [135/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:35.983 [136/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:35.983 [137/743] Generating lib/rte_ethdev_def with a custom command 00:02:35.983 [138/743] Generating lib/rte_ethdev_mingw with a custom command 00:02:35.983 [139/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:35.983 [140/743] Generating lib/rte_pci_def with a custom command 00:02:36.243 [141/743] Generating lib/rte_pci_mingw with a custom command 00:02:36.243 [142/743] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:36.243 [143/743] Linking static target lib/librte_pci.a 00:02:36.243 [144/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:36.243 [145/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:36.243 [146/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:36.243 [147/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:36.243 [148/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:36.243 [149/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:36.243 [150/743] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.243 [151/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:36.501 [152/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:36.501 [153/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:36.501 [154/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:36.501 [155/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:36.501 [156/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:36.501 [157/743] Generating lib/rte_cmdline_def with a custom command 00:02:36.501 [158/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:36.501 [159/743] Generating lib/rte_cmdline_mingw with a custom command 00:02:36.501 [160/743] Generating lib/rte_metrics_def with a custom command 00:02:36.501 [161/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:36.501 [162/743] Generating lib/rte_metrics_mingw with a custom command 00:02:36.759 [163/743] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:36.760 [164/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:36.760 [165/743] Generating lib/rte_hash_def with a custom command 00:02:36.760 [166/743] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:36.760 [167/743] Generating lib/rte_hash_mingw with a custom command 00:02:36.760 [168/743] Generating lib/rte_timer_def with a custom command 00:02:36.760 [169/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:36.760 [170/743] Generating lib/rte_timer_mingw with a custom command 00:02:36.760 [171/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:36.760 [172/743] Linking static target lib/librte_cmdline.a 00:02:37.018 [173/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:37.275 [174/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:37.275 [175/743] Linking static target lib/librte_metrics.a 00:02:37.275 [176/743] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:37.275 [177/743] Linking static target lib/librte_timer.a 00:02:37.532 [178/743] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.532 [179/743] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.790 [180/743] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:37.790 [181/743] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:37.790 [182/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:37.790 [183/743] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.790 [184/743] Linking static target lib/librte_ethdev.a 00:02:38.356 [185/743] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:38.356 [186/743] Generating lib/rte_acl_def with a custom command 00:02:38.356 [187/743] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:38.356 [188/743] Generating lib/rte_acl_mingw with a custom command 00:02:38.356 [189/743] Generating lib/rte_bbdev_def with a custom command 00:02:38.356 [190/743] Generating lib/rte_bbdev_mingw with a custom command 00:02:38.356 [191/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:38.614 [192/743] Generating lib/rte_bitratestats_def with a custom command 00:02:38.614 [193/743] Generating lib/rte_bitratestats_mingw with a custom command 00:02:38.871 [194/743] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:39.129 [195/743] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:39.129 [196/743] Linking static target lib/librte_bitratestats.a 00:02:39.129 [197/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:39.386 [198/743] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.386 [199/743] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:39.387 [200/743] Linking static target lib/librte_bbdev.a 00:02:39.387 [201/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:39.644 [202/743] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:39.645 [203/743] Linking static target lib/librte_hash.a 00:02:39.903 [204/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:39.903 [205/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:39.903 [206/743] Compiling C object 
lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:39.903 [207/743] Linking static target lib/acl/libavx512_tmp.a 00:02:39.903 [208/743] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.161 [209/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:40.419 [210/743] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.419 [211/743] Generating lib/rte_bpf_def with a custom command 00:02:40.419 [212/743] Generating lib/rte_bpf_mingw with a custom command 00:02:40.419 [213/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:40.419 [214/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:40.419 [215/743] Generating lib/rte_cfgfile_def with a custom command 00:02:40.419 [216/743] Generating lib/rte_cfgfile_mingw with a custom command 00:02:40.677 [217/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:40.677 [218/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:40.677 [219/743] Linking static target lib/librte_acl.a 00:02:40.677 [220/743] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:40.677 [221/743] Linking static target lib/librte_cfgfile.a 00:02:40.935 [222/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:40.935 [223/743] Generating lib/rte_compressdev_def with a custom command 00:02:40.935 [224/743] Generating lib/rte_compressdev_mingw with a custom command 00:02:40.935 [225/743] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.935 [226/743] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.193 [227/743] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.193 [228/743] Linking target lib/librte_eal.so.23.0 00:02:41.193 [229/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:41.193 [230/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:41.193 [231/743] Generating lib/rte_cryptodev_def with a custom command 00:02:41.193 [232/743] Generating lib/rte_cryptodev_mingw with a custom command 00:02:41.193 [233/743] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:41.193 [234/743] Linking target lib/librte_ring.so.23.0 00:02:41.452 [235/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:41.452 [236/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:41.452 [237/743] Linking target lib/librte_meter.so.23.0 00:02:41.452 [238/743] Linking target lib/librte_pci.so.23.0 00:02:41.452 [239/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:41.452 [240/743] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:41.452 [241/743] Linking target lib/librte_timer.so.23.0 00:02:41.452 [242/743] Linking target lib/librte_rcu.so.23.0 00:02:41.452 [243/743] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:41.452 [244/743] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:41.452 [245/743] Linking target lib/librte_mempool.so.23.0 00:02:41.452 [246/743] Linking target lib/librte_acl.so.23.0 00:02:41.710 [247/743] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:41.710 [248/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:41.710 [249/743] 
Linking static target lib/librte_bpf.a 00:02:41.710 [250/743] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:41.710 [251/743] Linking static target lib/librte_compressdev.a 00:02:41.710 [252/743] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:41.710 [253/743] Linking target lib/librte_cfgfile.so.23.0 00:02:41.710 [254/743] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:41.710 [255/743] Linking target lib/librte_mbuf.so.23.0 00:02:41.710 [256/743] Generating lib/rte_distributor_def with a custom command 00:02:41.710 [257/743] Generating lib/rte_distributor_mingw with a custom command 00:02:41.710 [258/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:41.969 [259/743] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:41.969 [260/743] Generating lib/rte_efd_def with a custom command 00:02:41.969 [261/743] Linking target lib/librte_net.so.23.0 00:02:41.969 [262/743] Linking target lib/librte_bbdev.so.23.0 00:02:41.969 [263/743] Generating lib/rte_efd_mingw with a custom command 00:02:41.969 [264/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:41.969 [265/743] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.969 [266/743] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:41.969 [267/743] Linking target lib/librte_cmdline.so.23.0 00:02:42.227 [268/743] Linking target lib/librte_hash.so.23.0 00:02:42.227 [269/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:42.227 [270/743] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:42.228 [271/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:42.228 [272/743] Linking static target lib/librte_distributor.a 00:02:42.486 [273/743] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.486 [274/743] Linking target lib/librte_compressdev.so.23.0 00:02:42.486 [275/743] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.486 [276/743] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.486 [277/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:42.744 [278/743] Linking target lib/librte_distributor.so.23.0 00:02:42.744 [279/743] Linking target lib/librte_ethdev.so.23.0 00:02:42.744 [280/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:42.744 [281/743] Generating lib/rte_eventdev_def with a custom command 00:02:42.744 [282/743] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:42.744 [283/743] Generating lib/rte_eventdev_mingw with a custom command 00:02:42.744 [284/743] Linking target lib/librte_metrics.so.23.0 00:02:42.744 [285/743] Linking target lib/librte_bpf.so.23.0 00:02:43.003 [286/743] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:43.003 [287/743] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:43.003 [288/743] Linking target lib/librte_bitratestats.so.23.0 00:02:43.003 [289/743] Generating lib/rte_gpudev_def with a custom command 00:02:43.003 [290/743] Generating lib/rte_gpudev_mingw with a custom 
command 00:02:43.261 [291/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:43.261 [292/743] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:43.519 [293/743] Linking static target lib/librte_efd.a 00:02:43.519 [294/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:43.519 [295/743] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.778 [296/743] Linking target lib/librte_efd.so.23.0 00:02:43.778 [297/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:43.778 [298/743] Linking static target lib/librte_cryptodev.a 00:02:43.778 [299/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:43.778 [300/743] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:43.778 [301/743] Generating lib/rte_gro_def with a custom command 00:02:44.036 [302/743] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:44.036 [303/743] Linking static target lib/librte_gpudev.a 00:02:44.036 [304/743] Generating lib/rte_gro_mingw with a custom command 00:02:44.036 [305/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:44.036 [306/743] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:44.036 [307/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:44.294 [308/743] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:44.552 [309/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:44.552 [310/743] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:44.552 [311/743] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:44.552 [312/743] Generating lib/rte_gso_def with a custom command 00:02:44.552 [313/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:44.552 [314/743] Generating lib/rte_gso_mingw with a custom command 00:02:44.552 [315/743] Linking static target lib/librte_gro.a 00:02:44.844 [316/743] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.844 [317/743] Linking target lib/librte_gpudev.so.23.0 00:02:44.844 [318/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:44.844 [319/743] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:44.844 [320/743] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.845 [321/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:44.845 [322/743] Linking target lib/librte_gro.so.23.0 00:02:45.104 [323/743] Generating lib/rte_ip_frag_def with a custom command 00:02:45.104 [324/743] Generating lib/rte_ip_frag_mingw with a custom command 00:02:45.104 [325/743] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:45.104 [326/743] Linking static target lib/librte_jobstats.a 00:02:45.104 [327/743] Generating lib/rte_jobstats_def with a custom command 00:02:45.104 [328/743] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:45.104 [329/743] Linking static target lib/librte_gso.a 00:02:45.104 [330/743] Generating lib/rte_jobstats_mingw with a custom command 00:02:45.362 [331/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:45.362 [332/743] Linking static target lib/librte_eventdev.a 00:02:45.362 [333/743] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.362 [334/743] Compiling C 
object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:45.362 [335/743] Linking target lib/librte_gso.so.23.0 00:02:45.362 [336/743] Generating lib/rte_latencystats_def with a custom command 00:02:45.621 [337/743] Generating lib/rte_latencystats_mingw with a custom command 00:02:45.621 [338/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:45.621 [339/743] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.621 [340/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:45.621 [341/743] Linking target lib/librte_jobstats.so.23.0 00:02:45.621 [342/743] Generating lib/rte_lpm_def with a custom command 00:02:45.621 [343/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:45.621 [344/743] Generating lib/rte_lpm_mingw with a custom command 00:02:45.621 [345/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:45.879 [346/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:45.879 [347/743] Linking static target lib/librte_ip_frag.a 00:02:45.879 [348/743] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.879 [349/743] Linking target lib/librte_cryptodev.so.23.0 00:02:46.138 [350/743] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:46.138 [351/743] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.138 [352/743] Linking target lib/librte_ip_frag.so.23.0 00:02:46.138 [353/743] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:46.396 [354/743] Linking static target lib/librte_latencystats.a 00:02:46.396 [355/743] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:46.396 [356/743] Generating lib/rte_member_def with a custom command 00:02:46.396 [357/743] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:46.396 [358/743] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:46.396 [359/743] Generating lib/rte_member_mingw with a custom command 00:02:46.396 [360/743] Generating lib/rte_pcapng_def with a custom command 00:02:46.396 [361/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:46.396 [362/743] Generating lib/rte_pcapng_mingw with a custom command 00:02:46.396 [363/743] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:46.396 [364/743] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.654 [365/743] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:46.654 [366/743] Linking target lib/librte_latencystats.so.23.0 00:02:46.654 [367/743] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:46.654 [368/743] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:46.654 [369/743] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:46.913 [370/743] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:46.913 [371/743] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:46.913 [372/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:46.913 [373/743] Linking static target lib/librte_lpm.a 00:02:46.913 [374/743] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:47.171 [375/743] Generating 
lib/rte_power_def with a custom command 00:02:47.171 [376/743] Generating lib/rte_power_mingw with a custom command 00:02:47.171 [377/743] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:47.171 [378/743] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.171 [379/743] Generating lib/rte_rawdev_def with a custom command 00:02:47.171 [380/743] Generating lib/rte_rawdev_mingw with a custom command 00:02:47.171 [381/743] Linking target lib/librte_eventdev.so.23.0 00:02:47.429 [382/743] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.429 [383/743] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:47.429 [384/743] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:47.429 [385/743] Linking static target lib/librte_pcapng.a 00:02:47.429 [386/743] Generating lib/rte_regexdev_def with a custom command 00:02:47.429 [387/743] Linking target lib/librte_lpm.so.23.0 00:02:47.429 [388/743] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:47.429 [389/743] Generating lib/rte_regexdev_mingw with a custom command 00:02:47.429 [390/743] Generating lib/rte_dmadev_def with a custom command 00:02:47.429 [391/743] Generating lib/rte_dmadev_mingw with a custom command 00:02:47.429 [392/743] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:47.429 [393/743] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:47.429 [394/743] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:47.429 [395/743] Linking static target lib/librte_rawdev.a 00:02:47.429 [396/743] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:47.429 [397/743] Generating lib/rte_rib_def with a custom command 00:02:47.429 [398/743] Generating lib/rte_rib_mingw with a custom command 00:02:47.687 [399/743] Generating lib/rte_reorder_def with a custom command 00:02:47.687 [400/743] Generating lib/rte_reorder_mingw with a custom command 00:02:47.687 [401/743] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.687 [402/743] Linking target lib/librte_pcapng.so.23.0 00:02:47.687 [403/743] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:47.687 [404/743] Linking static target lib/librte_power.a 00:02:47.687 [405/743] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:47.946 [406/743] Linking static target lib/librte_dmadev.a 00:02:47.946 [407/743] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:47.946 [408/743] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.946 [409/743] Linking target lib/librte_rawdev.so.23.0 00:02:47.946 [410/743] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:47.946 [411/743] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:48.204 [412/743] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:48.204 [413/743] Linking static target lib/librte_regexdev.a 00:02:48.204 [414/743] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:48.204 [415/743] Linking static target lib/librte_member.a 00:02:48.204 [416/743] Generating lib/rte_sched_def with a custom command 00:02:48.204 [417/743] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:48.204 [418/743] Generating 
lib/rte_sched_mingw with a custom command 00:02:48.204 [419/743] Generating lib/rte_security_def with a custom command 00:02:48.204 [420/743] Generating lib/rte_security_mingw with a custom command 00:02:48.204 [421/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:48.463 [422/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:48.463 [423/743] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:48.463 [424/743] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.463 [425/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:48.463 [426/743] Linking static target lib/librte_reorder.a 00:02:48.463 [427/743] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.463 [428/743] Generating lib/rte_stack_def with a custom command 00:02:48.463 [429/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:48.463 [430/743] Linking target lib/librte_dmadev.so.23.0 00:02:48.463 [431/743] Linking static target lib/librte_stack.a 00:02:48.463 [432/743] Generating lib/rte_stack_mingw with a custom command 00:02:48.463 [433/743] Linking target lib/librte_member.so.23.0 00:02:48.463 [434/743] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:48.721 [435/743] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:48.721 [436/743] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.721 [437/743] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.721 [438/743] Linking target lib/librte_stack.so.23.0 00:02:48.721 [439/743] Linking target lib/librte_reorder.so.23.0 00:02:48.721 [440/743] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.721 [441/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:48.721 [442/743] Linking static target lib/librte_rib.a 00:02:48.721 [443/743] Linking target lib/librte_power.so.23.0 00:02:48.721 [444/743] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.721 [445/743] Linking target lib/librte_regexdev.so.23.0 00:02:48.979 [446/743] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:48.979 [447/743] Linking static target lib/librte_security.a 00:02:49.241 [448/743] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.241 [449/743] Linking target lib/librte_rib.so.23.0 00:02:49.241 [450/743] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:49.241 [451/743] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:49.241 [452/743] Generating lib/rte_vhost_def with a custom command 00:02:49.241 [453/743] Generating lib/rte_vhost_mingw with a custom command 00:02:49.499 [454/743] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:49.499 [455/743] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.499 [456/743] Linking target lib/librte_security.so.23.0 00:02:49.499 [457/743] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:49.499 [458/743] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:49.757 [459/743] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:49.757 [460/743] Linking static target lib/librte_sched.a 00:02:50.015 [461/743] 
Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.015 [462/743] Linking target lib/librte_sched.so.23.0 00:02:50.274 [463/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:50.274 [464/743] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:50.274 [465/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:50.274 [466/743] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:50.274 [467/743] Generating lib/rte_ipsec_def with a custom command 00:02:50.274 [468/743] Generating lib/rte_ipsec_mingw with a custom command 00:02:50.274 [469/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:50.532 [470/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:50.532 [471/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:50.790 [472/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:51.048 [473/743] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:51.048 [474/743] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:51.048 [475/743] Generating lib/rte_fib_def with a custom command 00:02:51.048 [476/743] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:51.048 [477/743] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:51.048 [478/743] Generating lib/rte_fib_mingw with a custom command 00:02:51.048 [479/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:51.048 [480/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:51.306 [481/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:51.306 [482/743] Linking static target lib/librte_ipsec.a 00:02:51.564 [483/743] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.564 [484/743] Linking target lib/librte_ipsec.so.23.0 00:02:51.564 [485/743] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:51.822 [486/743] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:51.822 [487/743] Linking static target lib/librte_fib.a 00:02:51.822 [488/743] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:52.080 [489/743] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:52.080 [490/743] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:52.080 [491/743] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:52.080 [492/743] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.080 [493/743] Linking target lib/librte_fib.so.23.0 00:02:52.337 [494/743] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:52.903 [495/743] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:52.903 [496/743] Generating lib/rte_port_def with a custom command 00:02:52.903 [497/743] Generating lib/rte_port_mingw with a custom command 00:02:52.903 [498/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:52.903 [499/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:52.903 [500/743] Generating lib/rte_pdump_def with a custom command 00:02:52.903 [501/743] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:52.903 [502/743] Generating lib/rte_pdump_mingw with a custom command 00:02:53.161 [503/743] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:53.161 [504/743] Compiling C object 
lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:53.161 [505/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:53.420 [506/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:53.420 [507/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:53.420 [508/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:53.420 [509/743] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:53.420 [510/743] Linking static target lib/librte_port.a 00:02:53.986 [511/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:53.986 [512/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:53.986 [513/743] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.986 [514/743] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:53.986 [515/743] Linking target lib/librte_port.so.23.0 00:02:53.986 [516/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:54.244 [517/743] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:54.244 [518/743] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:54.244 [519/743] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:54.244 [520/743] Linking static target lib/librte_pdump.a 00:02:54.519 [521/743] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.519 [522/743] Linking target lib/librte_pdump.so.23.0 00:02:54.519 [523/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:54.519 [524/743] Generating lib/rte_table_def with a custom command 00:02:54.787 [525/743] Generating lib/rte_table_mingw with a custom command 00:02:54.787 [526/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:54.787 [527/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:55.045 [528/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:55.045 [529/743] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:55.045 [530/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:55.303 [531/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:55.303 [532/743] Generating lib/rte_pipeline_def with a custom command 00:02:55.303 [533/743] Generating lib/rte_pipeline_mingw with a custom command 00:02:55.560 [534/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:55.560 [535/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:55.560 [536/743] Linking static target lib/librte_table.a 00:02:55.560 [537/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:55.818 [538/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:56.076 [539/743] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.076 [540/743] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:56.076 [541/743] Linking target lib/librte_table.so.23.0 00:02:56.076 [542/743] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:56.076 [543/743] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:56.334 [544/743] Generating lib/rte_graph_def with a custom command 00:02:56.334 [545/743] Generating lib/rte_graph_mingw with a custom 
command 00:02:56.334 [546/743] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:56.334 [547/743] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:56.592 [548/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:56.851 [549/743] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:56.851 [550/743] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:56.851 [551/743] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:56.851 [552/743] Linking static target lib/librte_graph.a 00:02:57.109 [553/743] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:57.109 [554/743] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:57.109 [555/743] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:57.675 [556/743] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:57.675 [557/743] Generating lib/rte_node_def with a custom command 00:02:57.675 [558/743] Generating lib/rte_node_mingw with a custom command 00:02:57.675 [559/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:57.675 [560/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:57.933 [561/743] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:57.933 [562/743] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:57.933 [563/743] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.933 [564/743] Linking target lib/librte_graph.so.23.0 00:02:57.933 [565/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:57.933 [566/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:57.933 [567/743] Generating drivers/rte_bus_pci_def with a custom command 00:02:57.933 [568/743] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:57.933 [569/743] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:58.191 [570/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:58.191 [571/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:58.191 [572/743] Generating drivers/rte_bus_vdev_def with a custom command 00:02:58.191 [573/743] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:58.191 [574/743] Generating drivers/rte_mempool_ring_def with a custom command 00:02:58.191 [575/743] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:58.191 [576/743] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:58.191 [577/743] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:58.191 [578/743] Linking static target lib/librte_node.a 00:02:58.191 [579/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:58.191 [580/743] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:58.449 [581/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:58.449 [582/743] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.449 [583/743] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:58.449 [584/743] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:58.449 [585/743] Linking target lib/librte_node.so.23.0 00:02:58.449 [586/743] Linking static target drivers/librte_bus_vdev.a 00:02:58.449 [587/743] Compiling 
C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:58.449 [588/743] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:58.449 [589/743] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:58.708 [590/743] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:58.708 [591/743] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.708 [592/743] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:58.708 [593/743] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:58.708 [594/743] Linking static target drivers/librte_bus_pci.a 00:02:58.966 [595/743] Linking target drivers/librte_bus_vdev.so.23.0 00:02:58.966 [596/743] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:59.223 [597/743] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.223 [598/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:59.223 [599/743] Linking target drivers/librte_bus_pci.so.23.0 00:02:59.223 [600/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:59.223 [601/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:59.223 [602/743] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:59.479 [603/743] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:59.479 [604/743] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:59.737 [605/743] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:59.737 [606/743] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:59.737 [607/743] Linking static target drivers/librte_mempool_ring.a 00:02:59.737 [608/743] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:59.737 [609/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:59.737 [610/743] Linking target drivers/librte_mempool_ring.so.23.0 00:03:00.303 [611/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:00.561 [612/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:00.561 [613/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:00.561 [614/743] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:01.128 [615/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:01.128 [616/743] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:01.128 [617/743] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:01.694 [618/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:01.694 [619/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:01.953 [620/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:01.953 [621/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:01.953 [622/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:01.953 [623/743] Generating drivers/rte_net_i40e_def with a custom command 00:03:01.953 [624/743] Generating 
drivers/rte_net_i40e_mingw with a custom command 00:03:02.211 [625/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:03.145 [626/743] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:03.403 [627/743] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:03.403 [628/743] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:03.403 [629/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:03.662 [630/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:03.662 [631/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:03.662 [632/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:03.662 [633/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:03.662 [634/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:03.924 [635/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:03:04.182 [636/743] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:04.439 [637/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:04.440 [638/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:04.440 [639/743] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:04.698 [640/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:04.698 [641/743] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:04.956 [642/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:04.956 [643/743] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:04.956 [644/743] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:04.956 [645/743] Linking static target drivers/librte_net_i40e.a 00:03:04.956 [646/743] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:04.956 [647/743] Linking static target lib/librte_vhost.a 00:03:05.214 [648/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:05.214 [649/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:05.472 [650/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:05.472 [651/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:05.472 [652/743] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.472 [653/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:05.731 [654/743] Linking target drivers/librte_net_i40e.so.23.0 00:03:05.731 [655/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:05.731 [656/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:05.989 [657/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:06.247 [658/743] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.247 [659/743] Linking target lib/librte_vhost.so.23.0 00:03:06.506 [660/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:06.506 [661/743] 
Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:06.506 [662/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:06.506 [663/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:06.506 [664/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:06.506 [665/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:06.764 [666/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:06.764 [667/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:07.022 [668/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:07.022 [669/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:07.281 [670/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:07.539 [671/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:07.539 [672/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:07.539 [673/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:08.106 [674/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:08.106 [675/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:08.364 [676/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:08.623 [677/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:08.623 [678/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:08.623 [679/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:08.881 [680/743] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:08.881 [681/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:08.881 [682/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:09.139 [683/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:09.139 [684/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:09.139 [685/743] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:09.398 [686/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:09.398 [687/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:09.398 [688/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:09.656 [689/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:09.913 [690/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:09.913 [691/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:09.913 [692/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:09.913 [693/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:09.913 [694/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:10.480 [695/743] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:10.480 [696/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:10.480 [697/743] Compiling C 
object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:10.738 [698/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:10.996 [699/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:11.255 [700/743] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:11.255 [701/743] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:11.255 [702/743] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:11.513 [703/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:11.513 [704/743] Linking static target lib/librte_pipeline.a 00:03:11.513 [705/743] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:11.771 [706/743] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:11.771 [707/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:12.029 [708/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:12.029 [709/743] Linking target app/dpdk-dumpcap 00:03:12.029 [710/743] Linking target app/dpdk-pdump 00:03:12.288 [711/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:12.288 [712/743] Linking target app/dpdk-proc-info 00:03:12.288 [713/743] Linking target app/dpdk-test-acl 00:03:12.546 [714/743] Linking target app/dpdk-test-bbdev 00:03:12.546 [715/743] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:12.546 [716/743] Linking target app/dpdk-test-cmdline 00:03:12.546 [717/743] Linking target app/dpdk-test-crypto-perf 00:03:12.546 [718/743] Linking target app/dpdk-test-compress-perf 00:03:12.804 [719/743] Linking target app/dpdk-test-eventdev 00:03:12.804 [720/743] Linking target app/dpdk-test-fib 00:03:13.062 [721/743] Linking target app/dpdk-test-gpudev 00:03:13.062 [722/743] Linking target app/dpdk-test-flow-perf 00:03:13.062 [723/743] Linking target app/dpdk-test-pipeline 00:03:13.062 [724/743] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:13.321 [725/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:13.579 [726/743] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:13.579 [727/743] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:13.838 [728/743] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:13.838 [729/743] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:13.838 [730/743] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:14.096 [731/743] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.096 [732/743] Linking target lib/librte_pipeline.so.23.0 00:03:14.096 [733/743] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:14.354 [734/743] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:14.354 [735/743] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:14.613 [736/743] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:14.613 [737/743] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:14.881 [738/743] Linking target app/dpdk-test-sad 00:03:14.881 [739/743] Linking target app/dpdk-test-regex 00:03:15.156 [740/743] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:15.156 [741/743] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:15.415 [742/743] Linking target app/dpdk-test-security-perf 00:03:15.415 [743/743] Linking target app/dpdk-testpmd 
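The [743/743] link of dpdk-testpmd above is the final target of the ninja build; the script then moves on to the install pass that produces the long "Installing ..." listing below. As a rough, hedged sketch of what this stage amounts to if reproduced by hand (the meson configure flags and the exact branching inside common/autobuild_common.sh are not visible in this excerpt, so everything other than the final ninja install invocation is an illustrative assumption):

    # Assumed layout, mirroring the paths in the log: DPDK checkout under
    # /home/vagrant/spdk_repo/dpdk, build directory build-tmp, install prefix <checkout>/build.
    cd /home/vagrant/spdk_repo/dpdk
    meson setup build-tmp --prefix="$PWD/build"   # assumed configure step; the real flags are not shown in this excerpt
    ninja -C build-tmp -j10                       # parallel build, matching the [n/743] progress lines above
    uname -s                                      # the traced script compares the OS against FreeBSD before installing;
                                                  # on this runner the result is Linux, so it proceeds to install
    ninja -C build-tmp -j10 install               # the install pass whose output follows

The install pass copies the DPDK example sources into share/dpdk/examples under the prefix, which is exactly what the "Installing ... to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/..." entries that follow record.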
00:03:15.674 00:19:01 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:15.674 00:19:01 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:15.674 00:19:01 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:15.674 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:15.674 [0/1] Installing files. 00:03:15.936 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:15.936 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:15.936 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:15.937 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.937 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.937 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.937 
Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.937 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.938 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 
00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:15.938 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 
00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:15.939 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.939 
Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:15.939 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:15.940 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:15.940 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:15.940 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:15.940 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:15.940 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:15.940 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:15.940 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:15.940 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:15.940 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:15.940 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:15.940 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:15.940 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:15.940 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:15.940 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:15.940 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:15.940 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:15.940 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:15.940 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:15.940 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.199 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.199 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.199 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.199 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.199 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.199 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.199 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.199 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.199 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.199 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.199 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.199 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.199 Installing lib/librte_hash.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.199 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.199 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.199 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.199 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.199 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.199 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.199 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.199 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.199 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing 
lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:16.200 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:16.200 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:16.200 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:16.200 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.200 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:16.200 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.200 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.200 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.200 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.200 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.200 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.200 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.200 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.200 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.461 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.461 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.461 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.461 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.461 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.461 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.461 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.461 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 
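At this point the PMD shared objects have been placed in build/lib/dpdk/pmds-23.0 and the test applications in build/bin. A minimal sketch, assuming the build/ tree is used directly as an install prefix, of how the plugin directory could be pointed at one of the installed binaries through the standard EAL -d option (this invocation is illustrative and not taken from this run):

  # inspect the driver plugins installed above, then smoke-test an installed app
  ls /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
  /home/vagrant/spdk_repo/dpdk/build/bin/dpdk-testpmd \
      -d /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 --help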
00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.461 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.461 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.462 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing 
/home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.463 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.464 Installing 
/home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:16.464 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:16.464 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:16.464 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:16.464 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:16.464 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:16.464 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:16.464 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:16.464 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:16.464 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:16.464 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:16.464 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:16.464 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:16.464 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:16.464 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:16.464 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:16.464 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:16.464 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:16.464 Installing symlink 
pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:16.464 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:16.464 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:16.464 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:16.464 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:16.464 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:16.464 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:16.464 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:16.464 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:16.464 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:16.464 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:16.464 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:16.464 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:16.464 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:16.464 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:16.464 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:16.464 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:16.464 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:16.464 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:16.464 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:16.464 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:16.464 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:16.464 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:16.464 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:16.464 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:16.464 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:16.464 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:16.464 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:16.464 Installing symlink pointing to 
librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:16.464 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:16.464 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:16.464 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:16.464 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:16.464 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:16.464 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:16.464 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:16.464 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:16.464 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:16.464 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:16.464 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:16.464 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:16.464 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:16.464 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:16.464 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:16.464 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:16.464 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:16.464 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:16.464 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:16.464 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:16.465 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:16.465 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:16.465 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:16.465 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:16.465 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:16.465 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:16.465 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:16.465 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:16.465 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:16.465 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:16.465 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:16.465 Installing symlink pointing to librte_member.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:16.465 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:16.465 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:03:16.465 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:16.465 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:16.465 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:16.465 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:16.465 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:16.465 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:16.465 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:16.465 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:16.465 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:16.465 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:16.465 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:16.465 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:16.465 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:16.465 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:16.465 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:16.465 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:16.465 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:16.465 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:16.465 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:16.465 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:16.465 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:16.465 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:16.465 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:16.465 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:16.465 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:16.465 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 
00:03:16.465 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:16.465 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:16.465 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:16.465 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:16.465 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:16.465 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:16.465 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:16.465 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:16.465 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:16.465 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:16.465 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:16.465 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:16.465 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:16.465 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:16.465 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:16.465 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:16.465 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:16.465 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:16.465 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:16.465 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:16.465 00:19:02 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:16.465 ************************************ 00:03:16.465 END TEST build_native_dpdk 00:03:16.465 ************************************ 00:03:16.465 00:19:02 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:16.465 00:03:16.465 real 0m52.319s 00:03:16.465 user 6m12.690s 00:03:16.465 sys 0m55.624s 00:03:16.465 00:19:02 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:16.465 00:19:02 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:16.465 00:19:02 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:16.465 00:19:02 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:16.465 00:19:02 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:16.465 00:19:02 -- spdk/autobuild.sh@55 -- $ [[ -n 
'' ]] 00:03:16.465 00:19:02 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:16.465 00:19:02 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:16.465 00:19:02 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:16.465 00:19:02 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:16.724 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:16.724 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.724 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:16.724 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:17.291 Using 'verbs' RDMA provider 00:03:30.434 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:45.311 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:45.311 Creating mk/config.mk...done. 00:03:45.311 Creating mk/cc.flags.mk...done. 00:03:45.311 Type 'make' to build. 00:03:45.311 00:19:29 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:45.311 00:19:29 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:45.311 00:19:29 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:45.311 00:19:29 -- common/autotest_common.sh@10 -- $ set +x 00:03:45.311 ************************************ 00:03:45.311 START TEST make 00:03:45.311 ************************************ 00:03:45.311 00:19:29 make -- common/autotest_common.sh@1125 -- $ make -j10 00:03:45.311 make[1]: Nothing to be done for 'all'. 00:04:41.538 CC lib/ut_mock/mock.o 00:04:41.538 CC lib/log/log.o 00:04:41.538 CC lib/log/log_flags.o 00:04:41.538 CC lib/log/log_deprecated.o 00:04:41.538 CC lib/ut/ut.o 00:04:41.538 LIB libspdk_log.a 00:04:41.538 LIB libspdk_ut.a 00:04:41.538 LIB libspdk_ut_mock.a 00:04:41.538 SO libspdk_ut.so.2.0 00:04:41.538 SO libspdk_ut_mock.so.6.0 00:04:41.538 SO libspdk_log.so.7.0 00:04:41.538 SYMLINK libspdk_ut_mock.so 00:04:41.538 SYMLINK libspdk_ut.so 00:04:41.538 SYMLINK libspdk_log.so 00:04:41.538 CC lib/ioat/ioat.o 00:04:41.538 CC lib/util/bit_array.o 00:04:41.538 CC lib/util/base64.o 00:04:41.538 CC lib/util/crc16.o 00:04:41.538 CC lib/util/cpuset.o 00:04:41.538 CC lib/util/crc32c.o 00:04:41.538 CC lib/util/crc32.o 00:04:41.538 CC lib/dma/dma.o 00:04:41.538 CXX lib/trace_parser/trace.o 00:04:41.538 CC lib/vfio_user/host/vfio_user_pci.o 00:04:41.538 CC lib/util/crc32_ieee.o 00:04:41.538 CC lib/util/crc64.o 00:04:41.538 CC lib/util/dif.o 00:04:41.538 CC lib/vfio_user/host/vfio_user.o 00:04:41.538 CC lib/util/fd.o 00:04:41.538 LIB libspdk_dma.a 00:04:41.538 CC lib/util/fd_group.o 00:04:41.538 SO libspdk_dma.so.5.0 00:04:41.538 LIB libspdk_ioat.a 00:04:41.538 SO libspdk_ioat.so.7.0 00:04:41.538 SYMLINK libspdk_dma.so 00:04:41.538 CC lib/util/file.o 00:04:41.538 CC lib/util/hexlify.o 00:04:41.538 CC lib/util/iov.o 00:04:41.538 CC lib/util/math.o 00:04:41.538 SYMLINK libspdk_ioat.so 00:04:41.538 CC lib/util/net.o 00:04:41.538 CC lib/util/pipe.o 00:04:41.538 CC lib/util/strerror_tls.o 00:04:41.538 CC lib/util/string.o 00:04:41.538 CC lib/util/uuid.o 00:04:41.538 LIB libspdk_vfio_user.a 00:04:41.538 CC lib/util/xor.o 00:04:41.538 CC lib/util/zipf.o 00:04:41.538 SO libspdk_vfio_user.so.5.0 00:04:41.538 CC lib/util/md5.o 00:04:41.538 SYMLINK libspdk_vfio_user.so 
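For reference, the configure-and-build step recorded above boils down to roughly the following sequence. This is a minimal sketch assuming the workspace layout from this run (SPDK checked out at /home/vagrant/spdk_repo/spdk, DPDK already installed to /home/vagrant/spdk_repo/dpdk/build as logged earlier), not the exact autobuild.sh driver logic:

    #!/bin/bash
    # Rough reproduction of the configure/make step logged above.
    # Assumes the DPDK headers and libraries were installed to dpdk/build
    # (see the "Installing ..." entries earlier in this log).
    cd /home/vagrant/spdk_repo/spdk

    ./configure --enable-debug --enable-werror --with-rdma --with-usdt \
        --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator \
        --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk \
        --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared

    make -j10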
00:04:41.538 LIB libspdk_util.a 00:04:41.538 SO libspdk_util.so.10.0 00:04:41.538 SYMLINK libspdk_util.so 00:04:41.538 LIB libspdk_trace_parser.a 00:04:41.538 SO libspdk_trace_parser.so.6.0 00:04:41.538 SYMLINK libspdk_trace_parser.so 00:04:41.538 CC lib/vmd/vmd.o 00:04:41.538 CC lib/vmd/led.o 00:04:41.538 CC lib/conf/conf.o 00:04:41.538 CC lib/rdma_provider/common.o 00:04:41.538 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:41.538 CC lib/rdma_utils/rdma_utils.o 00:04:41.538 CC lib/env_dpdk/env.o 00:04:41.538 CC lib/env_dpdk/memory.o 00:04:41.538 CC lib/json/json_parse.o 00:04:41.538 CC lib/idxd/idxd.o 00:04:41.538 CC lib/json/json_util.o 00:04:41.538 CC lib/json/json_write.o 00:04:41.538 LIB libspdk_rdma_provider.a 00:04:41.538 SO libspdk_rdma_provider.so.6.0 00:04:41.538 LIB libspdk_conf.a 00:04:41.538 SO libspdk_conf.so.6.0 00:04:41.538 CC lib/env_dpdk/pci.o 00:04:41.538 LIB libspdk_rdma_utils.a 00:04:41.538 SYMLINK libspdk_rdma_provider.so 00:04:41.538 CC lib/idxd/idxd_user.o 00:04:41.538 SO libspdk_rdma_utils.so.1.0 00:04:41.538 SYMLINK libspdk_conf.so 00:04:41.538 CC lib/idxd/idxd_kernel.o 00:04:41.538 CC lib/env_dpdk/init.o 00:04:41.538 SYMLINK libspdk_rdma_utils.so 00:04:41.538 CC lib/env_dpdk/threads.o 00:04:41.538 CC lib/env_dpdk/pci_ioat.o 00:04:41.538 LIB libspdk_json.a 00:04:41.538 CC lib/env_dpdk/pci_virtio.o 00:04:41.538 SO libspdk_json.so.6.0 00:04:41.538 CC lib/env_dpdk/pci_vmd.o 00:04:41.538 CC lib/env_dpdk/pci_idxd.o 00:04:41.538 CC lib/env_dpdk/pci_event.o 00:04:41.538 SYMLINK libspdk_json.so 00:04:41.538 CC lib/env_dpdk/sigbus_handler.o 00:04:41.538 CC lib/env_dpdk/pci_dpdk.o 00:04:41.538 LIB libspdk_vmd.a 00:04:41.538 LIB libspdk_idxd.a 00:04:41.538 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:41.538 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:41.538 SO libspdk_vmd.so.6.0 00:04:41.538 SO libspdk_idxd.so.12.1 00:04:41.538 SYMLINK libspdk_vmd.so 00:04:41.538 CC lib/jsonrpc/jsonrpc_server.o 00:04:41.538 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:41.538 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:41.538 CC lib/jsonrpc/jsonrpc_client.o 00:04:41.538 SYMLINK libspdk_idxd.so 00:04:41.538 LIB libspdk_jsonrpc.a 00:04:41.538 SO libspdk_jsonrpc.so.6.0 00:04:41.538 SYMLINK libspdk_jsonrpc.so 00:04:41.538 CC lib/rpc/rpc.o 00:04:41.538 LIB libspdk_env_dpdk.a 00:04:41.538 SO libspdk_env_dpdk.so.15.0 00:04:41.538 LIB libspdk_rpc.a 00:04:41.538 SYMLINK libspdk_env_dpdk.so 00:04:41.538 SO libspdk_rpc.so.6.0 00:04:41.538 SYMLINK libspdk_rpc.so 00:04:41.538 CC lib/notify/notify.o 00:04:41.538 CC lib/notify/notify_rpc.o 00:04:41.538 CC lib/keyring/keyring_rpc.o 00:04:41.538 CC lib/keyring/keyring.o 00:04:41.538 CC lib/trace/trace.o 00:04:41.538 CC lib/trace/trace_rpc.o 00:04:41.538 CC lib/trace/trace_flags.o 00:04:41.538 LIB libspdk_notify.a 00:04:41.538 SO libspdk_notify.so.6.0 00:04:41.538 LIB libspdk_keyring.a 00:04:41.538 SYMLINK libspdk_notify.so 00:04:41.538 SO libspdk_keyring.so.2.0 00:04:41.538 LIB libspdk_trace.a 00:04:41.538 SYMLINK libspdk_keyring.so 00:04:41.538 SO libspdk_trace.so.11.0 00:04:41.538 SYMLINK libspdk_trace.so 00:04:41.538 CC lib/sock/sock.o 00:04:41.538 CC lib/sock/sock_rpc.o 00:04:41.538 CC lib/thread/thread.o 00:04:41.538 CC lib/thread/iobuf.o 00:04:41.538 LIB libspdk_sock.a 00:04:41.538 SO libspdk_sock.so.10.0 00:04:41.538 SYMLINK libspdk_sock.so 00:04:41.538 CC lib/nvme/nvme_ctrlr.o 00:04:41.538 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:41.538 CC lib/nvme/nvme_fabric.o 00:04:41.538 CC lib/nvme/nvme_ns_cmd.o 00:04:41.538 CC lib/nvme/nvme_pcie_common.o 00:04:41.538 CC 
lib/nvme/nvme_pcie.o 00:04:41.538 CC lib/nvme/nvme_ns.o 00:04:41.538 CC lib/nvme/nvme_qpair.o 00:04:41.538 CC lib/nvme/nvme.o 00:04:41.538 CC lib/nvme/nvme_quirks.o 00:04:41.538 CC lib/nvme/nvme_transport.o 00:04:41.538 LIB libspdk_thread.a 00:04:41.538 CC lib/nvme/nvme_discovery.o 00:04:41.538 SO libspdk_thread.so.10.1 00:04:41.538 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:41.538 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:41.538 SYMLINK libspdk_thread.so 00:04:41.539 CC lib/nvme/nvme_tcp.o 00:04:41.539 CC lib/nvme/nvme_opal.o 00:04:41.539 CC lib/nvme/nvme_io_msg.o 00:04:41.539 CC lib/accel/accel.o 00:04:41.539 CC lib/accel/accel_rpc.o 00:04:41.797 CC lib/accel/accel_sw.o 00:04:41.797 CC lib/nvme/nvme_poll_group.o 00:04:41.797 CC lib/nvme/nvme_zns.o 00:04:41.797 CC lib/nvme/nvme_stubs.o 00:04:42.056 CC lib/blob/blobstore.o 00:04:42.056 CC lib/blob/request.o 00:04:42.056 CC lib/init/json_config.o 00:04:42.314 CC lib/virtio/virtio.o 00:04:42.314 CC lib/init/subsystem.o 00:04:42.314 CC lib/init/subsystem_rpc.o 00:04:42.573 CC lib/init/rpc.o 00:04:42.573 LIB libspdk_accel.a 00:04:42.573 CC lib/blob/zeroes.o 00:04:42.573 CC lib/virtio/virtio_vhost_user.o 00:04:42.573 SO libspdk_accel.so.16.0 00:04:42.573 CC lib/nvme/nvme_auth.o 00:04:42.573 CC lib/virtio/virtio_vfio_user.o 00:04:42.573 CC lib/virtio/virtio_pci.o 00:04:42.573 LIB libspdk_init.a 00:04:42.573 SYMLINK libspdk_accel.so 00:04:42.573 CC lib/nvme/nvme_cuse.o 00:04:42.831 SO libspdk_init.so.6.0 00:04:42.831 CC lib/nvme/nvme_rdma.o 00:04:42.831 CC lib/fsdev/fsdev.o 00:04:42.831 CC lib/blob/blob_bs_dev.o 00:04:42.831 SYMLINK libspdk_init.so 00:04:42.831 CC lib/fsdev/fsdev_io.o 00:04:43.090 LIB libspdk_virtio.a 00:04:43.090 CC lib/bdev/bdev.o 00:04:43.090 CC lib/event/app.o 00:04:43.090 SO libspdk_virtio.so.7.0 00:04:43.090 CC lib/bdev/bdev_rpc.o 00:04:43.090 SYMLINK libspdk_virtio.so 00:04:43.090 CC lib/event/reactor.o 00:04:43.348 CC lib/fsdev/fsdev_rpc.o 00:04:43.348 CC lib/bdev/bdev_zone.o 00:04:43.348 CC lib/bdev/part.o 00:04:43.607 CC lib/bdev/scsi_nvme.o 00:04:43.607 LIB libspdk_fsdev.a 00:04:43.607 SO libspdk_fsdev.so.1.0 00:04:43.607 CC lib/event/log_rpc.o 00:04:43.607 CC lib/event/app_rpc.o 00:04:43.607 SYMLINK libspdk_fsdev.so 00:04:43.607 CC lib/event/scheduler_static.o 00:04:43.866 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:43.866 LIB libspdk_event.a 00:04:43.866 SO libspdk_event.so.14.0 00:04:44.125 SYMLINK libspdk_event.so 00:04:44.125 LIB libspdk_nvme.a 00:04:44.385 SO libspdk_nvme.so.14.0 00:04:44.643 LIB libspdk_fuse_dispatcher.a 00:04:44.643 SO libspdk_fuse_dispatcher.so.1.0 00:04:44.643 SYMLINK libspdk_nvme.so 00:04:44.643 SYMLINK libspdk_fuse_dispatcher.so 00:04:45.211 LIB libspdk_blob.a 00:04:45.211 SO libspdk_blob.so.11.0 00:04:45.470 SYMLINK libspdk_blob.so 00:04:45.729 CC lib/lvol/lvol.o 00:04:45.729 CC lib/blobfs/tree.o 00:04:45.729 CC lib/blobfs/blobfs.o 00:04:45.729 LIB libspdk_bdev.a 00:04:45.987 SO libspdk_bdev.so.16.0 00:04:45.987 SYMLINK libspdk_bdev.so 00:04:46.246 CC lib/scsi/dev.o 00:04:46.246 CC lib/nbd/nbd.o 00:04:46.246 CC lib/scsi/port.o 00:04:46.246 CC lib/scsi/lun.o 00:04:46.246 CC lib/scsi/scsi.o 00:04:46.246 CC lib/nvmf/ctrlr.o 00:04:46.246 CC lib/ublk/ublk.o 00:04:46.246 CC lib/ftl/ftl_core.o 00:04:46.503 CC lib/ftl/ftl_init.o 00:04:46.503 CC lib/nvmf/ctrlr_discovery.o 00:04:46.503 CC lib/nvmf/ctrlr_bdev.o 00:04:46.503 CC lib/scsi/scsi_bdev.o 00:04:46.762 LIB libspdk_blobfs.a 00:04:46.762 LIB libspdk_lvol.a 00:04:46.762 SO libspdk_blobfs.so.10.0 00:04:46.762 SO libspdk_lvol.so.10.0 00:04:46.762 CC 
lib/ftl/ftl_layout.o 00:04:46.762 CC lib/ftl/ftl_debug.o 00:04:46.762 CC lib/nbd/nbd_rpc.o 00:04:46.762 SYMLINK libspdk_lvol.so 00:04:46.762 CC lib/ftl/ftl_io.o 00:04:46.762 SYMLINK libspdk_blobfs.so 00:04:46.762 CC lib/nvmf/subsystem.o 00:04:47.021 CC lib/ublk/ublk_rpc.o 00:04:47.021 LIB libspdk_nbd.a 00:04:47.021 CC lib/ftl/ftl_sb.o 00:04:47.021 SO libspdk_nbd.so.7.0 00:04:47.021 CC lib/nvmf/nvmf.o 00:04:47.021 CC lib/ftl/ftl_l2p.o 00:04:47.021 SYMLINK libspdk_nbd.so 00:04:47.021 CC lib/ftl/ftl_l2p_flat.o 00:04:47.021 CC lib/scsi/scsi_pr.o 00:04:47.021 LIB libspdk_ublk.a 00:04:47.021 CC lib/scsi/scsi_rpc.o 00:04:47.279 SO libspdk_ublk.so.3.0 00:04:47.279 CC lib/scsi/task.o 00:04:47.279 SYMLINK libspdk_ublk.so 00:04:47.279 CC lib/nvmf/nvmf_rpc.o 00:04:47.279 CC lib/ftl/ftl_nv_cache.o 00:04:47.279 CC lib/nvmf/transport.o 00:04:47.279 CC lib/nvmf/tcp.o 00:04:47.279 CC lib/nvmf/stubs.o 00:04:47.538 CC lib/ftl/ftl_band.o 00:04:47.538 LIB libspdk_scsi.a 00:04:47.538 SO libspdk_scsi.so.9.0 00:04:47.538 SYMLINK libspdk_scsi.so 00:04:47.538 CC lib/ftl/ftl_band_ops.o 00:04:47.796 CC lib/nvmf/mdns_server.o 00:04:47.796 CC lib/ftl/ftl_writer.o 00:04:48.054 CC lib/nvmf/rdma.o 00:04:48.054 CC lib/nvmf/auth.o 00:04:48.054 CC lib/ftl/ftl_rq.o 00:04:48.054 CC lib/ftl/ftl_reloc.o 00:04:48.054 CC lib/ftl/ftl_l2p_cache.o 00:04:48.313 CC lib/ftl/ftl_p2l.o 00:04:48.313 CC lib/ftl/ftl_p2l_log.o 00:04:48.313 CC lib/ftl/mngt/ftl_mngt.o 00:04:48.313 CC lib/iscsi/conn.o 00:04:48.313 CC lib/vhost/vhost.o 00:04:48.571 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:48.571 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:48.571 CC lib/vhost/vhost_rpc.o 00:04:48.571 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:48.829 CC lib/iscsi/init_grp.o 00:04:48.829 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:48.829 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:48.829 CC lib/vhost/vhost_scsi.o 00:04:48.829 CC lib/iscsi/iscsi.o 00:04:49.087 CC lib/iscsi/param.o 00:04:49.087 CC lib/iscsi/portal_grp.o 00:04:49.087 CC lib/iscsi/tgt_node.o 00:04:49.087 CC lib/iscsi/iscsi_subsystem.o 00:04:49.087 CC lib/vhost/vhost_blk.o 00:04:49.087 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:49.345 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:49.345 CC lib/iscsi/iscsi_rpc.o 00:04:49.345 CC lib/vhost/rte_vhost_user.o 00:04:49.345 CC lib/iscsi/task.o 00:04:49.604 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:49.604 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:49.604 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:49.604 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:49.894 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:49.894 CC lib/ftl/utils/ftl_conf.o 00:04:49.894 CC lib/ftl/utils/ftl_md.o 00:04:49.894 CC lib/ftl/utils/ftl_mempool.o 00:04:49.894 CC lib/ftl/utils/ftl_bitmap.o 00:04:49.894 CC lib/ftl/utils/ftl_property.o 00:04:50.180 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:50.180 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:50.180 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:50.180 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:50.180 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:50.180 LIB libspdk_nvmf.a 00:04:50.180 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:50.439 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:50.439 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:50.439 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:50.439 LIB libspdk_iscsi.a 00:04:50.439 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:50.439 SO libspdk_nvmf.so.19.0 00:04:50.439 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:50.439 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:50.439 SO libspdk_iscsi.so.8.0 00:04:50.439 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:50.439 CC lib/ftl/base/ftl_base_dev.o 
00:04:50.697 CC lib/ftl/base/ftl_base_bdev.o 00:04:50.697 CC lib/ftl/ftl_trace.o 00:04:50.697 SYMLINK libspdk_nvmf.so 00:04:50.697 LIB libspdk_vhost.a 00:04:50.697 SYMLINK libspdk_iscsi.so 00:04:50.697 SO libspdk_vhost.so.8.0 00:04:50.697 SYMLINK libspdk_vhost.so 00:04:50.955 LIB libspdk_ftl.a 00:04:51.214 SO libspdk_ftl.so.9.0 00:04:51.472 SYMLINK libspdk_ftl.so 00:04:51.730 CC module/env_dpdk/env_dpdk_rpc.o 00:04:51.730 CC module/accel/error/accel_error.o 00:04:51.730 CC module/keyring/file/keyring.o 00:04:51.730 CC module/fsdev/aio/fsdev_aio.o 00:04:51.730 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:51.730 CC module/scheduler/gscheduler/gscheduler.o 00:04:51.731 CC module/sock/posix/posix.o 00:04:51.731 CC module/blob/bdev/blob_bdev.o 00:04:51.731 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:51.731 CC module/sock/uring/uring.o 00:04:51.989 LIB libspdk_env_dpdk_rpc.a 00:04:51.989 SO libspdk_env_dpdk_rpc.so.6.0 00:04:51.989 SYMLINK libspdk_env_dpdk_rpc.so 00:04:51.989 CC module/keyring/file/keyring_rpc.o 00:04:51.989 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:51.989 LIB libspdk_scheduler_gscheduler.a 00:04:51.989 LIB libspdk_scheduler_dpdk_governor.a 00:04:51.989 SO libspdk_scheduler_gscheduler.so.4.0 00:04:51.989 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:51.989 CC module/accel/error/accel_error_rpc.o 00:04:51.989 LIB libspdk_scheduler_dynamic.a 00:04:51.989 SO libspdk_scheduler_dynamic.so.4.0 00:04:51.989 SYMLINK libspdk_scheduler_gscheduler.so 00:04:51.989 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:51.989 CC module/fsdev/aio/linux_aio_mgr.o 00:04:52.248 LIB libspdk_keyring_file.a 00:04:52.248 LIB libspdk_blob_bdev.a 00:04:52.248 SYMLINK libspdk_scheduler_dynamic.so 00:04:52.248 SO libspdk_blob_bdev.so.11.0 00:04:52.248 SO libspdk_keyring_file.so.2.0 00:04:52.248 LIB libspdk_accel_error.a 00:04:52.248 SYMLINK libspdk_blob_bdev.so 00:04:52.248 SO libspdk_accel_error.so.2.0 00:04:52.248 SYMLINK libspdk_keyring_file.so 00:04:52.248 SYMLINK libspdk_accel_error.so 00:04:52.248 CC module/accel/dsa/accel_dsa.o 00:04:52.248 CC module/accel/dsa/accel_dsa_rpc.o 00:04:52.248 CC module/accel/ioat/accel_ioat.o 00:04:52.507 CC module/keyring/linux/keyring.o 00:04:52.507 CC module/accel/iaa/accel_iaa.o 00:04:52.507 LIB libspdk_fsdev_aio.a 00:04:52.507 CC module/accel/iaa/accel_iaa_rpc.o 00:04:52.507 CC module/bdev/delay/vbdev_delay.o 00:04:52.507 SO libspdk_fsdev_aio.so.1.0 00:04:52.507 CC module/blobfs/bdev/blobfs_bdev.o 00:04:52.507 CC module/accel/ioat/accel_ioat_rpc.o 00:04:52.507 CC module/keyring/linux/keyring_rpc.o 00:04:52.507 LIB libspdk_sock_uring.a 00:04:52.507 LIB libspdk_sock_posix.a 00:04:52.507 SO libspdk_sock_uring.so.5.0 00:04:52.507 SYMLINK libspdk_fsdev_aio.so 00:04:52.507 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:52.766 SO libspdk_sock_posix.so.6.0 00:04:52.766 LIB libspdk_accel_dsa.a 00:04:52.766 SYMLINK libspdk_sock_uring.so 00:04:52.766 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:52.766 SO libspdk_accel_dsa.so.5.0 00:04:52.766 LIB libspdk_accel_ioat.a 00:04:52.766 LIB libspdk_accel_iaa.a 00:04:52.766 LIB libspdk_keyring_linux.a 00:04:52.766 SYMLINK libspdk_sock_posix.so 00:04:52.766 SO libspdk_keyring_linux.so.1.0 00:04:52.766 SO libspdk_accel_iaa.so.3.0 00:04:52.766 SO libspdk_accel_ioat.so.6.0 00:04:52.766 SYMLINK libspdk_accel_dsa.so 00:04:52.766 SYMLINK libspdk_keyring_linux.so 00:04:52.766 SYMLINK libspdk_accel_iaa.so 00:04:52.766 SYMLINK libspdk_accel_ioat.so 00:04:52.766 LIB libspdk_blobfs_bdev.a 00:04:52.766 CC 
module/bdev/error/vbdev_error.o 00:04:52.766 SO libspdk_blobfs_bdev.so.6.0 00:04:53.025 CC module/bdev/gpt/gpt.o 00:04:53.025 SYMLINK libspdk_blobfs_bdev.so 00:04:53.025 LIB libspdk_bdev_delay.a 00:04:53.025 SO libspdk_bdev_delay.so.6.0 00:04:53.025 CC module/bdev/lvol/vbdev_lvol.o 00:04:53.025 CC module/bdev/malloc/bdev_malloc.o 00:04:53.025 CC module/bdev/null/bdev_null.o 00:04:53.025 CC module/bdev/nvme/bdev_nvme.o 00:04:53.025 CC module/bdev/passthru/vbdev_passthru.o 00:04:53.025 SYMLINK libspdk_bdev_delay.so 00:04:53.025 CC module/bdev/raid/bdev_raid.o 00:04:53.025 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:53.025 CC module/bdev/split/vbdev_split.o 00:04:53.025 CC module/bdev/gpt/vbdev_gpt.o 00:04:53.025 CC module/bdev/error/vbdev_error_rpc.o 00:04:53.284 CC module/bdev/raid/bdev_raid_rpc.o 00:04:53.284 CC module/bdev/null/bdev_null_rpc.o 00:04:53.284 LIB libspdk_bdev_error.a 00:04:53.284 LIB libspdk_bdev_passthru.a 00:04:53.284 CC module/bdev/split/vbdev_split_rpc.o 00:04:53.284 SO libspdk_bdev_error.so.6.0 00:04:53.284 SO libspdk_bdev_passthru.so.6.0 00:04:53.284 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:53.284 LIB libspdk_bdev_gpt.a 00:04:53.542 SYMLINK libspdk_bdev_error.so 00:04:53.542 SO libspdk_bdev_gpt.so.6.0 00:04:53.542 SYMLINK libspdk_bdev_passthru.so 00:04:53.542 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:53.542 LIB libspdk_bdev_null.a 00:04:53.542 CC module/bdev/raid/bdev_raid_sb.o 00:04:53.542 SO libspdk_bdev_null.so.6.0 00:04:53.542 SYMLINK libspdk_bdev_gpt.so 00:04:53.542 CC module/bdev/raid/raid0.o 00:04:53.542 CC module/bdev/raid/raid1.o 00:04:53.542 LIB libspdk_bdev_split.a 00:04:53.542 SO libspdk_bdev_split.so.6.0 00:04:53.542 SYMLINK libspdk_bdev_null.so 00:04:53.542 LIB libspdk_bdev_malloc.a 00:04:53.542 SO libspdk_bdev_malloc.so.6.0 00:04:53.542 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:53.542 SYMLINK libspdk_bdev_split.so 00:04:53.542 CC module/bdev/raid/concat.o 00:04:53.800 SYMLINK libspdk_bdev_malloc.so 00:04:53.800 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:53.800 CC module/bdev/uring/bdev_uring.o 00:04:53.800 CC module/bdev/uring/bdev_uring_rpc.o 00:04:53.800 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:53.800 LIB libspdk_bdev_lvol.a 00:04:53.800 SO libspdk_bdev_lvol.so.6.0 00:04:54.059 SYMLINK libspdk_bdev_lvol.so 00:04:54.059 CC module/bdev/aio/bdev_aio.o 00:04:54.059 CC module/bdev/nvme/nvme_rpc.o 00:04:54.059 CC module/bdev/nvme/bdev_mdns_client.o 00:04:54.059 LIB libspdk_bdev_zone_block.a 00:04:54.059 SO libspdk_bdev_zone_block.so.6.0 00:04:54.059 LIB libspdk_bdev_raid.a 00:04:54.059 CC module/bdev/iscsi/bdev_iscsi.o 00:04:54.059 CC module/bdev/ftl/bdev_ftl.o 00:04:54.059 SYMLINK libspdk_bdev_zone_block.so 00:04:54.059 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:54.059 SO libspdk_bdev_raid.so.6.0 00:04:54.059 LIB libspdk_bdev_uring.a 00:04:54.317 SO libspdk_bdev_uring.so.6.0 00:04:54.317 CC module/bdev/aio/bdev_aio_rpc.o 00:04:54.317 SYMLINK libspdk_bdev_raid.so 00:04:54.317 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:54.317 SYMLINK libspdk_bdev_uring.so 00:04:54.317 CC module/bdev/nvme/vbdev_opal.o 00:04:54.317 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:54.317 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:54.575 LIB libspdk_bdev_ftl.a 00:04:54.575 LIB libspdk_bdev_aio.a 00:04:54.575 SO libspdk_bdev_ftl.so.6.0 00:04:54.575 SO libspdk_bdev_aio.so.6.0 00:04:54.575 LIB libspdk_bdev_iscsi.a 00:04:54.575 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:54.575 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:54.575 CC 
module/bdev/virtio/bdev_virtio_blk.o 00:04:54.575 SYMLINK libspdk_bdev_ftl.so 00:04:54.575 SO libspdk_bdev_iscsi.so.6.0 00:04:54.575 SYMLINK libspdk_bdev_aio.so 00:04:54.575 SYMLINK libspdk_bdev_iscsi.so 00:04:55.141 LIB libspdk_bdev_virtio.a 00:04:55.141 SO libspdk_bdev_virtio.so.6.0 00:04:55.141 SYMLINK libspdk_bdev_virtio.so 00:04:55.399 LIB libspdk_bdev_nvme.a 00:04:55.399 SO libspdk_bdev_nvme.so.7.0 00:04:55.657 SYMLINK libspdk_bdev_nvme.so 00:04:56.224 CC module/event/subsystems/vmd/vmd.o 00:04:56.224 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:56.224 CC module/event/subsystems/keyring/keyring.o 00:04:56.224 CC module/event/subsystems/iobuf/iobuf.o 00:04:56.224 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:56.224 CC module/event/subsystems/fsdev/fsdev.o 00:04:56.224 CC module/event/subsystems/scheduler/scheduler.o 00:04:56.224 CC module/event/subsystems/sock/sock.o 00:04:56.224 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:56.224 LIB libspdk_event_keyring.a 00:04:56.224 LIB libspdk_event_fsdev.a 00:04:56.224 LIB libspdk_event_sock.a 00:04:56.224 LIB libspdk_event_iobuf.a 00:04:56.224 SO libspdk_event_keyring.so.1.0 00:04:56.224 SO libspdk_event_fsdev.so.1.0 00:04:56.224 SO libspdk_event_sock.so.5.0 00:04:56.224 LIB libspdk_event_vhost_blk.a 00:04:56.224 SO libspdk_event_iobuf.so.3.0 00:04:56.224 LIB libspdk_event_vmd.a 00:04:56.224 SO libspdk_event_vhost_blk.so.3.0 00:04:56.224 SYMLINK libspdk_event_sock.so 00:04:56.224 SYMLINK libspdk_event_keyring.so 00:04:56.224 SYMLINK libspdk_event_fsdev.so 00:04:56.224 LIB libspdk_event_scheduler.a 00:04:56.224 SO libspdk_event_vmd.so.6.0 00:04:56.224 SYMLINK libspdk_event_iobuf.so 00:04:56.224 SO libspdk_event_scheduler.so.4.0 00:04:56.482 SYMLINK libspdk_event_vhost_blk.so 00:04:56.482 SYMLINK libspdk_event_vmd.so 00:04:56.482 SYMLINK libspdk_event_scheduler.so 00:04:56.482 CC module/event/subsystems/accel/accel.o 00:04:56.741 LIB libspdk_event_accel.a 00:04:56.741 SO libspdk_event_accel.so.6.0 00:04:56.741 SYMLINK libspdk_event_accel.so 00:04:57.309 CC module/event/subsystems/bdev/bdev.o 00:04:57.309 LIB libspdk_event_bdev.a 00:04:57.309 SO libspdk_event_bdev.so.6.0 00:04:57.309 SYMLINK libspdk_event_bdev.so 00:04:57.567 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:57.567 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:57.567 CC module/event/subsystems/scsi/scsi.o 00:04:57.567 CC module/event/subsystems/ublk/ublk.o 00:04:57.567 CC module/event/subsystems/nbd/nbd.o 00:04:57.825 LIB libspdk_event_ublk.a 00:04:57.825 LIB libspdk_event_nbd.a 00:04:57.825 LIB libspdk_event_scsi.a 00:04:57.825 SO libspdk_event_ublk.so.3.0 00:04:57.825 SO libspdk_event_nbd.so.6.0 00:04:57.825 SO libspdk_event_scsi.so.6.0 00:04:57.825 SYMLINK libspdk_event_scsi.so 00:04:57.825 SYMLINK libspdk_event_ublk.so 00:04:57.825 SYMLINK libspdk_event_nbd.so 00:04:57.825 LIB libspdk_event_nvmf.a 00:04:58.083 SO libspdk_event_nvmf.so.6.0 00:04:58.083 SYMLINK libspdk_event_nvmf.so 00:04:58.083 CC module/event/subsystems/iscsi/iscsi.o 00:04:58.083 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:58.342 LIB libspdk_event_iscsi.a 00:04:58.342 LIB libspdk_event_vhost_scsi.a 00:04:58.342 SO libspdk_event_iscsi.so.6.0 00:04:58.342 SO libspdk_event_vhost_scsi.so.3.0 00:04:58.342 SYMLINK libspdk_event_iscsi.so 00:04:58.342 SYMLINK libspdk_event_vhost_scsi.so 00:04:58.601 SO libspdk.so.6.0 00:04:58.601 SYMLINK libspdk.so 00:04:58.859 CC test/rpc_client/rpc_client_test.o 00:04:58.859 CXX app/trace/trace.o 00:04:58.859 CC app/trace_record/trace_record.o 
00:04:58.859 TEST_HEADER include/spdk/accel.h 00:04:58.859 TEST_HEADER include/spdk/accel_module.h 00:04:58.859 TEST_HEADER include/spdk/assert.h 00:04:58.859 TEST_HEADER include/spdk/barrier.h 00:04:58.859 TEST_HEADER include/spdk/base64.h 00:04:58.859 TEST_HEADER include/spdk/bdev.h 00:04:58.859 TEST_HEADER include/spdk/bdev_module.h 00:04:58.859 TEST_HEADER include/spdk/bdev_zone.h 00:04:58.859 TEST_HEADER include/spdk/bit_array.h 00:04:58.859 TEST_HEADER include/spdk/bit_pool.h 00:04:58.859 TEST_HEADER include/spdk/blob_bdev.h 00:04:58.859 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:58.859 TEST_HEADER include/spdk/blobfs.h 00:04:58.859 TEST_HEADER include/spdk/blob.h 00:04:58.859 TEST_HEADER include/spdk/conf.h 00:04:58.859 TEST_HEADER include/spdk/config.h 00:04:58.859 TEST_HEADER include/spdk/cpuset.h 00:04:58.859 TEST_HEADER include/spdk/crc16.h 00:04:58.859 TEST_HEADER include/spdk/crc32.h 00:04:58.859 TEST_HEADER include/spdk/crc64.h 00:04:58.859 TEST_HEADER include/spdk/dif.h 00:04:58.859 TEST_HEADER include/spdk/dma.h 00:04:58.859 TEST_HEADER include/spdk/endian.h 00:04:58.859 TEST_HEADER include/spdk/env_dpdk.h 00:04:58.859 TEST_HEADER include/spdk/env.h 00:04:58.859 TEST_HEADER include/spdk/event.h 00:04:58.859 CC test/thread/poller_perf/poller_perf.o 00:04:58.859 TEST_HEADER include/spdk/fd_group.h 00:04:58.859 TEST_HEADER include/spdk/fd.h 00:04:58.859 TEST_HEADER include/spdk/file.h 00:04:58.859 CC examples/ioat/perf/perf.o 00:04:58.859 TEST_HEADER include/spdk/fsdev.h 00:04:58.859 TEST_HEADER include/spdk/fsdev_module.h 00:04:58.859 TEST_HEADER include/spdk/ftl.h 00:04:58.859 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:58.859 TEST_HEADER include/spdk/gpt_spec.h 00:04:58.859 TEST_HEADER include/spdk/hexlify.h 00:04:58.859 TEST_HEADER include/spdk/histogram_data.h 00:04:58.859 TEST_HEADER include/spdk/idxd.h 00:04:58.859 TEST_HEADER include/spdk/idxd_spec.h 00:04:58.859 CC examples/util/zipf/zipf.o 00:04:58.859 TEST_HEADER include/spdk/init.h 00:04:59.130 TEST_HEADER include/spdk/ioat.h 00:04:59.130 TEST_HEADER include/spdk/ioat_spec.h 00:04:59.130 TEST_HEADER include/spdk/iscsi_spec.h 00:04:59.130 CC test/app/bdev_svc/bdev_svc.o 00:04:59.130 CC test/dma/test_dma/test_dma.o 00:04:59.130 TEST_HEADER include/spdk/json.h 00:04:59.130 TEST_HEADER include/spdk/jsonrpc.h 00:04:59.130 TEST_HEADER include/spdk/keyring.h 00:04:59.130 TEST_HEADER include/spdk/keyring_module.h 00:04:59.130 TEST_HEADER include/spdk/likely.h 00:04:59.130 TEST_HEADER include/spdk/log.h 00:04:59.130 TEST_HEADER include/spdk/lvol.h 00:04:59.130 TEST_HEADER include/spdk/md5.h 00:04:59.130 TEST_HEADER include/spdk/memory.h 00:04:59.130 TEST_HEADER include/spdk/mmio.h 00:04:59.130 TEST_HEADER include/spdk/nbd.h 00:04:59.130 TEST_HEADER include/spdk/net.h 00:04:59.130 TEST_HEADER include/spdk/notify.h 00:04:59.130 TEST_HEADER include/spdk/nvme.h 00:04:59.130 TEST_HEADER include/spdk/nvme_intel.h 00:04:59.130 LINK rpc_client_test 00:04:59.130 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:59.130 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:59.130 CC test/env/mem_callbacks/mem_callbacks.o 00:04:59.130 TEST_HEADER include/spdk/nvme_spec.h 00:04:59.130 TEST_HEADER include/spdk/nvme_zns.h 00:04:59.130 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:59.130 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:59.130 TEST_HEADER include/spdk/nvmf.h 00:04:59.130 TEST_HEADER include/spdk/nvmf_spec.h 00:04:59.130 TEST_HEADER include/spdk/nvmf_transport.h 00:04:59.130 TEST_HEADER include/spdk/opal.h 00:04:59.130 TEST_HEADER 
include/spdk/opal_spec.h 00:04:59.130 TEST_HEADER include/spdk/pci_ids.h 00:04:59.130 TEST_HEADER include/spdk/pipe.h 00:04:59.130 TEST_HEADER include/spdk/queue.h 00:04:59.130 TEST_HEADER include/spdk/reduce.h 00:04:59.130 TEST_HEADER include/spdk/rpc.h 00:04:59.130 TEST_HEADER include/spdk/scheduler.h 00:04:59.130 TEST_HEADER include/spdk/scsi.h 00:04:59.130 TEST_HEADER include/spdk/scsi_spec.h 00:04:59.130 TEST_HEADER include/spdk/sock.h 00:04:59.130 LINK poller_perf 00:04:59.130 TEST_HEADER include/spdk/stdinc.h 00:04:59.130 TEST_HEADER include/spdk/string.h 00:04:59.130 TEST_HEADER include/spdk/thread.h 00:04:59.130 TEST_HEADER include/spdk/trace.h 00:04:59.130 TEST_HEADER include/spdk/trace_parser.h 00:04:59.130 TEST_HEADER include/spdk/tree.h 00:04:59.130 LINK spdk_trace_record 00:04:59.130 TEST_HEADER include/spdk/ublk.h 00:04:59.130 TEST_HEADER include/spdk/util.h 00:04:59.130 TEST_HEADER include/spdk/uuid.h 00:04:59.130 TEST_HEADER include/spdk/version.h 00:04:59.130 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:59.130 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:59.130 TEST_HEADER include/spdk/vhost.h 00:04:59.130 TEST_HEADER include/spdk/vmd.h 00:04:59.130 TEST_HEADER include/spdk/xor.h 00:04:59.130 TEST_HEADER include/spdk/zipf.h 00:04:59.130 CXX test/cpp_headers/accel.o 00:04:59.130 LINK ioat_perf 00:04:59.130 LINK zipf 00:04:59.415 LINK bdev_svc 00:04:59.415 LINK mem_callbacks 00:04:59.415 LINK spdk_trace 00:04:59.415 CXX test/cpp_headers/accel_module.o 00:04:59.415 CC examples/ioat/verify/verify.o 00:04:59.415 CC app/nvmf_tgt/nvmf_main.o 00:04:59.673 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:59.673 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:59.673 CC test/env/vtophys/vtophys.o 00:04:59.673 LINK test_dma 00:04:59.673 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:59.673 CC app/iscsi_tgt/iscsi_tgt.o 00:04:59.673 CXX test/cpp_headers/assert.o 00:04:59.673 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:59.673 LINK nvmf_tgt 00:04:59.673 LINK verify 00:04:59.673 LINK vtophys 00:04:59.673 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:59.931 LINK iscsi_tgt 00:04:59.931 CXX test/cpp_headers/barrier.o 00:04:59.931 LINK env_dpdk_post_init 00:04:59.931 CC test/env/memory/memory_ut.o 00:04:59.931 LINK nvme_fuzz 00:04:59.931 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:59.931 CC test/env/pci/pci_ut.o 00:04:59.931 CXX test/cpp_headers/base64.o 00:05:00.188 CC examples/thread/thread/thread_ex.o 00:05:00.188 CC test/app/histogram_perf/histogram_perf.o 00:05:00.188 LINK vhost_fuzz 00:05:00.188 CXX test/cpp_headers/bdev.o 00:05:00.188 LINK interrupt_tgt 00:05:00.188 CC app/spdk_tgt/spdk_tgt.o 00:05:00.188 CC test/app/jsoncat/jsoncat.o 00:05:00.446 LINK histogram_perf 00:05:00.446 CXX test/cpp_headers/bdev_module.o 00:05:00.446 LINK pci_ut 00:05:00.446 LINK thread 00:05:00.446 LINK jsoncat 00:05:00.446 LINK spdk_tgt 00:05:00.446 CC app/spdk_lspci/spdk_lspci.o 00:05:00.705 CXX test/cpp_headers/bdev_zone.o 00:05:00.705 CC test/event/event_perf/event_perf.o 00:05:00.705 CC test/nvme/aer/aer.o 00:05:00.705 LINK memory_ut 00:05:00.705 LINK spdk_lspci 00:05:00.705 CC test/nvme/reset/reset.o 00:05:00.705 CC test/nvme/sgl/sgl.o 00:05:00.705 CXX test/cpp_headers/bit_array.o 00:05:00.705 CC test/nvme/e2edp/nvme_dp.o 00:05:00.963 LINK event_perf 00:05:00.963 CC examples/sock/hello_world/hello_sock.o 00:05:00.963 CXX test/cpp_headers/bit_pool.o 00:05:00.963 CC app/spdk_nvme_perf/perf.o 00:05:00.963 LINK aer 00:05:00.963 LINK reset 00:05:00.963 CC test/nvme/overhead/overhead.o 
00:05:00.963 CC test/event/reactor/reactor.o 00:05:00.963 LINK sgl 00:05:01.300 LINK nvme_dp 00:05:01.300 CXX test/cpp_headers/blob_bdev.o 00:05:01.300 LINK hello_sock 00:05:01.300 CXX test/cpp_headers/blobfs_bdev.o 00:05:01.300 CXX test/cpp_headers/blobfs.o 00:05:01.300 LINK reactor 00:05:01.300 LINK iscsi_fuzz 00:05:01.300 CC test/nvme/err_injection/err_injection.o 00:05:01.300 LINK overhead 00:05:01.300 CXX test/cpp_headers/blob.o 00:05:01.558 CC test/nvme/startup/startup.o 00:05:01.558 CC test/app/stub/stub.o 00:05:01.558 CC examples/vmd/lsvmd/lsvmd.o 00:05:01.558 CC test/event/reactor_perf/reactor_perf.o 00:05:01.558 CC examples/idxd/perf/perf.o 00:05:01.558 CXX test/cpp_headers/conf.o 00:05:01.558 LINK err_injection 00:05:01.558 LINK startup 00:05:01.558 CC test/event/app_repeat/app_repeat.o 00:05:01.558 LINK lsvmd 00:05:01.558 LINK reactor_perf 00:05:01.816 LINK stub 00:05:01.816 CC test/event/scheduler/scheduler.o 00:05:01.816 CXX test/cpp_headers/config.o 00:05:01.816 LINK app_repeat 00:05:01.816 CXX test/cpp_headers/cpuset.o 00:05:01.816 CC app/spdk_nvme_identify/identify.o 00:05:01.816 LINK idxd_perf 00:05:01.816 CC test/nvme/reserve/reserve.o 00:05:01.816 CC test/nvme/simple_copy/simple_copy.o 00:05:02.074 LINK spdk_nvme_perf 00:05:02.074 CC examples/vmd/led/led.o 00:05:02.074 LINK scheduler 00:05:02.074 CC test/nvme/connect_stress/connect_stress.o 00:05:02.074 CXX test/cpp_headers/crc16.o 00:05:02.074 CC test/nvme/boot_partition/boot_partition.o 00:05:02.074 LINK led 00:05:02.074 CC test/nvme/compliance/nvme_compliance.o 00:05:02.074 LINK simple_copy 00:05:02.333 CXX test/cpp_headers/crc32.o 00:05:02.333 LINK connect_stress 00:05:02.333 LINK reserve 00:05:02.333 CC test/nvme/fused_ordering/fused_ordering.o 00:05:02.333 LINK boot_partition 00:05:02.333 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:02.333 CXX test/cpp_headers/crc64.o 00:05:02.591 CC app/spdk_nvme_discover/discovery_aer.o 00:05:02.591 CC test/nvme/fdp/fdp.o 00:05:02.591 LINK nvme_compliance 00:05:02.591 LINK fused_ordering 00:05:02.591 LINK doorbell_aers 00:05:02.591 CXX test/cpp_headers/dif.o 00:05:02.591 CC test/accel/dif/dif.o 00:05:02.591 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:02.591 CC app/spdk_top/spdk_top.o 00:05:02.848 LINK spdk_nvme_discover 00:05:02.848 LINK spdk_nvme_identify 00:05:02.848 CXX test/cpp_headers/dma.o 00:05:02.848 CC test/nvme/cuse/cuse.o 00:05:02.848 LINK fdp 00:05:02.848 CC examples/accel/perf/accel_perf.o 00:05:02.848 CXX test/cpp_headers/endian.o 00:05:02.848 LINK hello_fsdev 00:05:03.105 CC test/blobfs/mkfs/mkfs.o 00:05:03.105 CC app/vhost/vhost.o 00:05:03.105 CXX test/cpp_headers/env_dpdk.o 00:05:03.105 CXX test/cpp_headers/env.o 00:05:03.105 CC app/spdk_dd/spdk_dd.o 00:05:03.105 LINK mkfs 00:05:03.105 CC test/lvol/esnap/esnap.o 00:05:03.363 LINK vhost 00:05:03.363 LINK dif 00:05:03.363 CXX test/cpp_headers/event.o 00:05:03.363 LINK accel_perf 00:05:03.363 CC app/fio/nvme/fio_plugin.o 00:05:03.621 CXX test/cpp_headers/fd_group.o 00:05:03.621 CXX test/cpp_headers/fd.o 00:05:03.622 CC app/fio/bdev/fio_plugin.o 00:05:03.622 LINK spdk_top 00:05:03.622 LINK spdk_dd 00:05:03.622 CXX test/cpp_headers/file.o 00:05:03.879 CC examples/blob/hello_world/hello_blob.o 00:05:03.879 CC test/bdev/bdevio/bdevio.o 00:05:03.879 CC examples/nvme/hello_world/hello_world.o 00:05:03.879 CXX test/cpp_headers/fsdev.o 00:05:03.879 CC examples/bdev/hello_world/hello_bdev.o 00:05:04.137 LINK spdk_nvme 00:05:04.137 CC examples/bdev/bdevperf/bdevperf.o 00:05:04.137 LINK hello_blob 00:05:04.137 LINK 
spdk_bdev 00:05:04.137 CXX test/cpp_headers/fsdev_module.o 00:05:04.137 LINK hello_world 00:05:04.137 LINK cuse 00:05:04.137 CC examples/nvme/reconnect/reconnect.o 00:05:04.137 LINK bdevio 00:05:04.396 LINK hello_bdev 00:05:04.396 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:04.396 CXX test/cpp_headers/ftl.o 00:05:04.396 CC examples/nvme/arbitration/arbitration.o 00:05:04.396 CC examples/blob/cli/blobcli.o 00:05:04.396 CXX test/cpp_headers/fuse_dispatcher.o 00:05:04.396 CC examples/nvme/hotplug/hotplug.o 00:05:04.396 CXX test/cpp_headers/gpt_spec.o 00:05:04.655 CXX test/cpp_headers/hexlify.o 00:05:04.655 LINK reconnect 00:05:04.655 CXX test/cpp_headers/histogram_data.o 00:05:04.655 CXX test/cpp_headers/idxd.o 00:05:04.655 LINK hotplug 00:05:04.913 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:04.913 LINK arbitration 00:05:04.913 LINK nvme_manage 00:05:04.913 CC examples/nvme/abort/abort.o 00:05:04.913 CXX test/cpp_headers/idxd_spec.o 00:05:04.913 CXX test/cpp_headers/init.o 00:05:04.913 LINK bdevperf 00:05:04.913 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:04.913 LINK blobcli 00:05:04.913 CXX test/cpp_headers/ioat.o 00:05:04.913 CXX test/cpp_headers/ioat_spec.o 00:05:04.913 LINK cmb_copy 00:05:05.172 CXX test/cpp_headers/iscsi_spec.o 00:05:05.172 CXX test/cpp_headers/json.o 00:05:05.172 CXX test/cpp_headers/jsonrpc.o 00:05:05.172 LINK pmr_persistence 00:05:05.172 CXX test/cpp_headers/keyring.o 00:05:05.172 CXX test/cpp_headers/keyring_module.o 00:05:05.172 CXX test/cpp_headers/likely.o 00:05:05.172 CXX test/cpp_headers/log.o 00:05:05.172 CXX test/cpp_headers/lvol.o 00:05:05.430 LINK abort 00:05:05.430 CXX test/cpp_headers/md5.o 00:05:05.430 CXX test/cpp_headers/memory.o 00:05:05.430 CXX test/cpp_headers/mmio.o 00:05:05.430 CXX test/cpp_headers/nbd.o 00:05:05.430 CXX test/cpp_headers/net.o 00:05:05.430 CXX test/cpp_headers/notify.o 00:05:05.430 CXX test/cpp_headers/nvme.o 00:05:05.430 CXX test/cpp_headers/nvme_intel.o 00:05:05.430 CXX test/cpp_headers/nvme_ocssd.o 00:05:05.430 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:05.430 CXX test/cpp_headers/nvme_spec.o 00:05:05.689 CXX test/cpp_headers/nvme_zns.o 00:05:05.689 CXX test/cpp_headers/nvmf_cmd.o 00:05:05.689 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:05.689 CXX test/cpp_headers/nvmf.o 00:05:05.689 CXX test/cpp_headers/nvmf_spec.o 00:05:05.689 CXX test/cpp_headers/nvmf_transport.o 00:05:05.689 CC examples/nvmf/nvmf/nvmf.o 00:05:05.689 CXX test/cpp_headers/opal.o 00:05:05.689 CXX test/cpp_headers/opal_spec.o 00:05:05.689 CXX test/cpp_headers/pci_ids.o 00:05:05.689 CXX test/cpp_headers/pipe.o 00:05:05.689 CXX test/cpp_headers/queue.o 00:05:05.689 CXX test/cpp_headers/reduce.o 00:05:05.689 CXX test/cpp_headers/rpc.o 00:05:05.689 CXX test/cpp_headers/scheduler.o 00:05:05.948 CXX test/cpp_headers/scsi.o 00:05:05.948 CXX test/cpp_headers/scsi_spec.o 00:05:05.948 CXX test/cpp_headers/sock.o 00:05:05.948 CXX test/cpp_headers/stdinc.o 00:05:05.948 CXX test/cpp_headers/string.o 00:05:05.948 CXX test/cpp_headers/thread.o 00:05:05.948 CXX test/cpp_headers/trace.o 00:05:05.948 LINK nvmf 00:05:05.948 CXX test/cpp_headers/trace_parser.o 00:05:05.948 CXX test/cpp_headers/tree.o 00:05:06.208 CXX test/cpp_headers/ublk.o 00:05:06.208 CXX test/cpp_headers/util.o 00:05:06.208 CXX test/cpp_headers/uuid.o 00:05:06.208 CXX test/cpp_headers/version.o 00:05:06.208 CXX test/cpp_headers/vfio_user_pci.o 00:05:06.208 CXX test/cpp_headers/vfio_user_spec.o 00:05:06.208 CXX test/cpp_headers/vhost.o 00:05:06.208 CXX test/cpp_headers/vmd.o 00:05:06.208 CXX 
test/cpp_headers/xor.o 00:05:06.208 CXX test/cpp_headers/zipf.o 00:05:08.743 LINK esnap 00:05:08.743 00:05:08.743 real 1m24.771s 00:05:08.743 user 6m59.227s 00:05:08.743 sys 1m8.579s 00:05:08.743 ************************************ 00:05:08.743 END TEST make 00:05:08.743 ************************************ 00:05:08.743 00:20:54 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:05:08.743 00:20:54 make -- common/autotest_common.sh@10 -- $ set +x 00:05:08.743 00:20:54 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:08.743 00:20:54 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:08.743 00:20:54 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:08.743 00:20:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:08.743 00:20:54 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:08.743 00:20:54 -- pm/common@44 -- $ pid=6024 00:05:08.743 00:20:54 -- pm/common@50 -- $ kill -TERM 6024 00:05:08.743 00:20:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:08.743 00:20:54 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:08.743 00:20:54 -- pm/common@44 -- $ pid=6026 00:05:08.743 00:20:54 -- pm/common@50 -- $ kill -TERM 6026 00:05:08.743 00:20:54 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:08.743 00:20:54 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:08.743 00:20:54 -- common/autotest_common.sh@1681 -- # lcov --version 00:05:09.002 00:20:54 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:09.002 00:20:54 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.002 00:20:54 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.002 00:20:54 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.002 00:20:54 -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.002 00:20:54 -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.002 00:20:54 -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.002 00:20:54 -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.002 00:20:54 -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.002 00:20:54 -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.002 00:20:54 -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.002 00:20:54 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.002 00:20:54 -- scripts/common.sh@344 -- # case "$op" in 00:05:09.002 00:20:54 -- scripts/common.sh@345 -- # : 1 00:05:09.002 00:20:54 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.002 00:20:54 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:09.002 00:20:54 -- scripts/common.sh@365 -- # decimal 1 00:05:09.002 00:20:54 -- scripts/common.sh@353 -- # local d=1 00:05:09.002 00:20:54 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.002 00:20:54 -- scripts/common.sh@355 -- # echo 1 00:05:09.002 00:20:54 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.002 00:20:54 -- scripts/common.sh@366 -- # decimal 2 00:05:09.002 00:20:54 -- scripts/common.sh@353 -- # local d=2 00:05:09.002 00:20:54 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.002 00:20:54 -- scripts/common.sh@355 -- # echo 2 00:05:09.002 00:20:54 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.002 00:20:54 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.002 00:20:54 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.002 00:20:54 -- scripts/common.sh@368 -- # return 0 00:05:09.002 00:20:54 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.002 00:20:54 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:09.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.002 --rc genhtml_branch_coverage=1 00:05:09.002 --rc genhtml_function_coverage=1 00:05:09.002 --rc genhtml_legend=1 00:05:09.002 --rc geninfo_all_blocks=1 00:05:09.002 --rc geninfo_unexecuted_blocks=1 00:05:09.002 00:05:09.002 ' 00:05:09.002 00:20:54 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:09.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.002 --rc genhtml_branch_coverage=1 00:05:09.002 --rc genhtml_function_coverage=1 00:05:09.002 --rc genhtml_legend=1 00:05:09.002 --rc geninfo_all_blocks=1 00:05:09.002 --rc geninfo_unexecuted_blocks=1 00:05:09.002 00:05:09.002 ' 00:05:09.002 00:20:54 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:09.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.002 --rc genhtml_branch_coverage=1 00:05:09.002 --rc genhtml_function_coverage=1 00:05:09.002 --rc genhtml_legend=1 00:05:09.002 --rc geninfo_all_blocks=1 00:05:09.003 --rc geninfo_unexecuted_blocks=1 00:05:09.003 00:05:09.003 ' 00:05:09.003 00:20:54 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:09.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.003 --rc genhtml_branch_coverage=1 00:05:09.003 --rc genhtml_function_coverage=1 00:05:09.003 --rc genhtml_legend=1 00:05:09.003 --rc geninfo_all_blocks=1 00:05:09.003 --rc geninfo_unexecuted_blocks=1 00:05:09.003 00:05:09.003 ' 00:05:09.003 00:20:54 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:09.003 00:20:54 -- nvmf/common.sh@7 -- # uname -s 00:05:09.003 00:20:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:09.003 00:20:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:09.003 00:20:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:09.003 00:20:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:09.003 00:20:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:09.003 00:20:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:09.003 00:20:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:09.003 00:20:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:09.003 00:20:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:09.003 00:20:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:09.003 00:20:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:05:09.003 
00:20:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:05:09.003 00:20:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:09.003 00:20:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:09.003 00:20:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:05:09.003 00:20:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:09.003 00:20:54 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:09.003 00:20:54 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:09.003 00:20:54 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:09.003 00:20:54 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:09.003 00:20:54 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:09.003 00:20:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.003 00:20:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.003 00:20:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.003 00:20:54 -- paths/export.sh@5 -- # export PATH 00:05:09.003 00:20:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.003 00:20:54 -- nvmf/common.sh@51 -- # : 0 00:05:09.003 00:20:54 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:09.003 00:20:54 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:09.003 00:20:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:09.003 00:20:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:09.003 00:20:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:09.003 00:20:54 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:09.003 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:09.003 00:20:54 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:09.003 00:20:54 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:09.003 00:20:54 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:09.003 00:20:54 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:09.003 00:20:54 -- spdk/autotest.sh@32 -- # uname -s 00:05:09.003 00:20:54 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:09.003 00:20:54 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:09.003 00:20:54 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:09.003 00:20:54 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:09.003 00:20:54 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:09.003 00:20:54 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:09.003 00:20:54 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:09.003 00:20:54 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:09.003 00:20:54 -- spdk/autotest.sh@48 -- # udevadm_pid=66595 00:05:09.003 00:20:54 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:09.003 00:20:54 -- pm/common@17 -- # local monitor 00:05:09.003 00:20:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:09.003 00:20:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:09.003 00:20:54 -- pm/common@25 -- # sleep 1 00:05:09.003 00:20:54 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:09.003 00:20:54 -- pm/common@21 -- # date +%s 00:05:09.003 00:20:54 -- pm/common@21 -- # date +%s 00:05:09.003 00:20:54 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734394854 00:05:09.003 00:20:54 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734394854 00:05:09.003 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734394854_collect-cpu-load.pm.log 00:05:09.003 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734394854_collect-vmstat.pm.log 00:05:09.961 00:20:55 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:09.961 00:20:55 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:09.961 00:20:55 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:09.961 00:20:55 -- common/autotest_common.sh@10 -- # set +x 00:05:09.962 00:20:55 -- spdk/autotest.sh@59 -- # create_test_list 00:05:09.962 00:20:55 -- common/autotest_common.sh@748 -- # xtrace_disable 00:05:09.962 00:20:55 -- common/autotest_common.sh@10 -- # set +x 00:05:09.962 00:20:55 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:10.221 00:20:55 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:10.221 00:20:55 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:10.221 00:20:55 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:10.221 00:20:55 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:10.221 00:20:55 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:10.221 00:20:55 -- common/autotest_common.sh@1455 -- # uname 00:05:10.221 00:20:55 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:10.221 00:20:55 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:10.221 00:20:55 -- common/autotest_common.sh@1475 -- # uname 00:05:10.221 00:20:55 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:10.221 00:20:55 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:10.221 00:20:55 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:10.221 lcov: LCOV version 1.15 00:05:10.221 00:20:56 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:25.102 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:25.102 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:40.019 00:21:25 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:40.019 00:21:25 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:40.019 00:21:25 -- common/autotest_common.sh@10 -- # set +x 00:05:40.019 00:21:25 -- spdk/autotest.sh@78 -- # rm -f 00:05:40.019 00:21:25 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:40.278 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:40.537 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:40.537 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:40.537 00:21:26 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:40.537 00:21:26 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:40.537 00:21:26 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:40.537 00:21:26 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:40.537 00:21:26 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:40.537 00:21:26 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:40.537 00:21:26 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:40.537 00:21:26 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:40.537 00:21:26 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:40.537 00:21:26 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:40.537 00:21:26 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:40.537 00:21:26 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:40.537 00:21:26 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:40.537 00:21:26 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:40.537 00:21:26 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:40.537 00:21:26 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:05:40.537 00:21:26 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:05:40.537 00:21:26 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:40.537 00:21:26 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:40.537 00:21:26 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:40.537 00:21:26 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:05:40.537 00:21:26 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:05:40.537 00:21:26 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:40.537 00:21:26 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:40.537 00:21:26 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:40.537 00:21:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:40.537 00:21:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:40.537 00:21:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:40.537 00:21:26 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:40.537 00:21:26 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:40.537 No valid GPT data, bailing 
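(Editor's note) The trace above shows the pre-cleanup pass: autotest.sh loops over the NVMe namespaces, skips zoned ones, treats a namespace with no partition table as unused ("No valid GPT data, bailing"), and then overwrites its first 1 MiB with zeros. The following is a minimal hedged sketch of that logic, not the actual autotest.sh/common.sh code; the device glob and blkid call mirror the commands in the log.

    # Hedged sketch of the traced pre-cleanup pass (illustrative only).
    shopt -s extglob
    for dev in /dev/nvme*n!(*p*); do
        name=$(basename "$dev")
        # Skip zoned namespaces, mirroring is_block_zoned in the trace.
        if [[ -e /sys/block/$name/queue/zoned ]] && \
           [[ $(cat "/sys/block/$name/queue/zoned") != none ]]; then
            continue
        fi
        # A device with a partition table is considered in use, mirroring block_in_use.
        if [[ -n $(blkid -s PTTYPE -o value "$dev") ]]; then
            continue
        fi
        dd if=/dev/zero of="$dev" bs=1M count=1
    done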
00:05:40.537 00:21:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:40.537 00:21:26 -- scripts/common.sh@394 -- # pt= 00:05:40.537 00:21:26 -- scripts/common.sh@395 -- # return 1 00:05:40.537 00:21:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:40.537 1+0 records in 00:05:40.537 1+0 records out 00:05:40.537 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00470641 s, 223 MB/s 00:05:40.537 00:21:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:40.537 00:21:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:40.537 00:21:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:40.537 00:21:26 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:40.537 00:21:26 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:40.537 No valid GPT data, bailing 00:05:40.537 00:21:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:40.537 00:21:26 -- scripts/common.sh@394 -- # pt= 00:05:40.537 00:21:26 -- scripts/common.sh@395 -- # return 1 00:05:40.537 00:21:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:40.537 1+0 records in 00:05:40.537 1+0 records out 00:05:40.537 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00450018 s, 233 MB/s 00:05:40.537 00:21:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:40.537 00:21:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:40.537 00:21:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:40.537 00:21:26 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:40.537 00:21:26 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:40.796 No valid GPT data, bailing 00:05:40.796 00:21:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:40.796 00:21:26 -- scripts/common.sh@394 -- # pt= 00:05:40.796 00:21:26 -- scripts/common.sh@395 -- # return 1 00:05:40.796 00:21:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:40.796 1+0 records in 00:05:40.796 1+0 records out 00:05:40.796 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00483615 s, 217 MB/s 00:05:40.796 00:21:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:40.797 00:21:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:40.797 00:21:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:40.797 00:21:26 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:40.797 00:21:26 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:40.797 No valid GPT data, bailing 00:05:40.797 00:21:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:40.797 00:21:26 -- scripts/common.sh@394 -- # pt= 00:05:40.797 00:21:26 -- scripts/common.sh@395 -- # return 1 00:05:40.797 00:21:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:40.797 1+0 records in 00:05:40.797 1+0 records out 00:05:40.797 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00458169 s, 229 MB/s 00:05:40.797 00:21:26 -- spdk/autotest.sh@105 -- # sync 00:05:40.797 00:21:26 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:40.797 00:21:26 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:40.797 00:21:26 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:42.700 00:21:28 -- spdk/autotest.sh@111 -- # uname -s 00:05:42.700 00:21:28 -- spdk/autotest.sh@111 -- # [[ Linux 
== Linux ]] 00:05:42.700 00:21:28 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:42.700 00:21:28 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:43.636 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:43.636 Hugepages 00:05:43.636 node hugesize free / total 00:05:43.636 node0 1048576kB 0 / 0 00:05:43.636 node0 2048kB 0 / 0 00:05:43.636 00:05:43.636 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:43.636 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:43.636 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:43.636 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:43.636 00:21:29 -- spdk/autotest.sh@117 -- # uname -s 00:05:43.636 00:21:29 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:43.636 00:21:29 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:43.636 00:21:29 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:44.204 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:44.462 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:44.462 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:44.462 00:21:30 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:45.399 00:21:31 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:45.399 00:21:31 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:45.399 00:21:31 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:45.658 00:21:31 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:45.658 00:21:31 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:45.658 00:21:31 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:45.658 00:21:31 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:45.658 00:21:31 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:45.658 00:21:31 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:45.658 00:21:31 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:45.658 00:21:31 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:45.658 00:21:31 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:45.916 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:45.916 Waiting for block devices as requested 00:05:45.916 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:46.175 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:46.175 00:21:32 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:46.175 00:21:32 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:46.175 00:21:32 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:46.175 00:21:32 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:46.175 00:21:32 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:46.175 00:21:32 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:46.175 00:21:32 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:46.175 00:21:32 -- common/autotest_common.sh@1490 -- # 
printf '%s\n' nvme1 00:05:46.175 00:21:32 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:46.175 00:21:32 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:46.175 00:21:32 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:46.175 00:21:32 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:46.175 00:21:32 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:46.175 00:21:32 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:46.175 00:21:32 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:46.175 00:21:32 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:46.175 00:21:32 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:46.175 00:21:32 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:46.175 00:21:32 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:46.175 00:21:32 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:46.175 00:21:32 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:46.175 00:21:32 -- common/autotest_common.sh@1541 -- # continue 00:05:46.175 00:21:32 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:46.175 00:21:32 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:46.175 00:21:32 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:46.175 00:21:32 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:46.175 00:21:32 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:46.175 00:21:32 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:46.175 00:21:32 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:46.175 00:21:32 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:46.175 00:21:32 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:46.175 00:21:32 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:46.175 00:21:32 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:46.175 00:21:32 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:46.175 00:21:32 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:46.175 00:21:32 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:46.175 00:21:32 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:46.175 00:21:32 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:46.175 00:21:32 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:46.175 00:21:32 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:46.175 00:21:32 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:46.175 00:21:32 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:46.175 00:21:32 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:46.175 00:21:32 -- common/autotest_common.sh@1541 -- # continue 00:05:46.175 00:21:32 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:46.175 00:21:32 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:46.175 00:21:32 -- common/autotest_common.sh@10 -- # set +x 00:05:46.175 00:21:32 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:46.175 00:21:32 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:46.175 00:21:32 -- common/autotest_common.sh@10 -- # set +x 00:05:46.175 00:21:32 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:47.111 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:47.111 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:47.111 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:47.111 00:21:32 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:47.111 00:21:32 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:47.111 00:21:32 -- common/autotest_common.sh@10 -- # set +x 00:05:47.111 00:21:33 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:47.111 00:21:33 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:47.111 00:21:33 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:47.111 00:21:33 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:47.111 00:21:33 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:47.111 00:21:33 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:47.111 00:21:33 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:47.111 00:21:33 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:47.111 00:21:33 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:47.111 00:21:33 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:47.111 00:21:33 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:47.111 00:21:33 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:47.111 00:21:33 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:47.111 00:21:33 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:47.111 00:21:33 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:47.111 00:21:33 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:47.111 00:21:33 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:47.111 00:21:33 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:47.111 00:21:33 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:47.111 00:21:33 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:47.111 00:21:33 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:47.111 00:21:33 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:47.111 00:21:33 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:47.111 00:21:33 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:47.111 00:21:33 -- common/autotest_common.sh@1570 -- # return 0 00:05:47.111 00:21:33 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:47.111 00:21:33 -- common/autotest_common.sh@1578 -- # return 0 00:05:47.111 00:21:33 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:47.111 00:21:33 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:47.111 00:21:33 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:47.111 00:21:33 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:47.111 00:21:33 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:47.111 00:21:33 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:47.111 00:21:33 -- common/autotest_common.sh@10 -- # set +x 00:05:47.111 00:21:33 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:05:47.111 00:21:33 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:05:47.111 00:21:33 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:05:47.111 00:21:33 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:47.111 00:21:33 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.111 00:21:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.111 00:21:33 -- common/autotest_common.sh@10 -- # set +x 00:05:47.371 ************************************ 00:05:47.371 START TEST env 00:05:47.371 ************************************ 00:05:47.371 00:21:33 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:47.371 * Looking for test storage... 00:05:47.371 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:47.371 00:21:33 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:47.371 00:21:33 env -- common/autotest_common.sh@1681 -- # lcov --version 00:05:47.371 00:21:33 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:47.371 00:21:33 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:47.371 00:21:33 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.371 00:21:33 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.371 00:21:33 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.371 00:21:33 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.371 00:21:33 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.371 00:21:33 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.371 00:21:33 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.371 00:21:33 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.371 00:21:33 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.371 00:21:33 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.371 00:21:33 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.371 00:21:33 env -- scripts/common.sh@344 -- # case "$op" in 00:05:47.371 00:21:33 env -- scripts/common.sh@345 -- # : 1 00:05:47.371 00:21:33 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.371 00:21:33 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:47.371 00:21:33 env -- scripts/common.sh@365 -- # decimal 1 00:05:47.371 00:21:33 env -- scripts/common.sh@353 -- # local d=1 00:05:47.371 00:21:33 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.371 00:21:33 env -- scripts/common.sh@355 -- # echo 1 00:05:47.371 00:21:33 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.371 00:21:33 env -- scripts/common.sh@366 -- # decimal 2 00:05:47.371 00:21:33 env -- scripts/common.sh@353 -- # local d=2 00:05:47.371 00:21:33 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.371 00:21:33 env -- scripts/common.sh@355 -- # echo 2 00:05:47.371 00:21:33 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.371 00:21:33 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.371 00:21:33 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.371 00:21:33 env -- scripts/common.sh@368 -- # return 0 00:05:47.371 00:21:33 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.371 00:21:33 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:47.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.371 --rc genhtml_branch_coverage=1 00:05:47.371 --rc genhtml_function_coverage=1 00:05:47.371 --rc genhtml_legend=1 00:05:47.371 --rc geninfo_all_blocks=1 00:05:47.371 --rc geninfo_unexecuted_blocks=1 00:05:47.371 00:05:47.371 ' 00:05:47.371 00:21:33 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:47.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.371 --rc genhtml_branch_coverage=1 00:05:47.371 --rc genhtml_function_coverage=1 00:05:47.371 --rc genhtml_legend=1 00:05:47.371 --rc geninfo_all_blocks=1 00:05:47.371 --rc geninfo_unexecuted_blocks=1 00:05:47.371 00:05:47.371 ' 00:05:47.371 00:21:33 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:47.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.371 --rc genhtml_branch_coverage=1 00:05:47.371 --rc genhtml_function_coverage=1 00:05:47.371 --rc genhtml_legend=1 00:05:47.371 --rc geninfo_all_blocks=1 00:05:47.371 --rc geninfo_unexecuted_blocks=1 00:05:47.371 00:05:47.371 ' 00:05:47.371 00:21:33 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:47.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.371 --rc genhtml_branch_coverage=1 00:05:47.371 --rc genhtml_function_coverage=1 00:05:47.371 --rc genhtml_legend=1 00:05:47.371 --rc geninfo_all_blocks=1 00:05:47.371 --rc geninfo_unexecuted_blocks=1 00:05:47.371 00:05:47.371 ' 00:05:47.371 00:21:33 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:47.371 00:21:33 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.371 00:21:33 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.371 00:21:33 env -- common/autotest_common.sh@10 -- # set +x 00:05:47.371 ************************************ 00:05:47.371 START TEST env_memory 00:05:47.371 ************************************ 00:05:47.371 00:21:33 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:47.371 00:05:47.371 00:05:47.371 CUnit - A unit testing framework for C - Version 2.1-3 00:05:47.371 http://cunit.sourceforge.net/ 00:05:47.371 00:05:47.371 00:05:47.371 Suite: memory 00:05:47.630 Test: alloc and free memory map ...[2024-12-17 00:21:33.396656] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:47.630 passed 00:05:47.630 Test: mem map translation ...[2024-12-17 00:21:33.427421] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:47.630 [2024-12-17 00:21:33.427470] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:47.630 [2024-12-17 00:21:33.427535] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:47.630 [2024-12-17 00:21:33.427545] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:47.630 passed 00:05:47.630 Test: mem map registration ...[2024-12-17 00:21:33.492864] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:47.630 [2024-12-17 00:21:33.492912] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:47.630 passed 00:05:47.630 Test: mem map adjacent registrations ...passed 00:05:47.630 00:05:47.630 Run Summary: Type Total Ran Passed Failed Inactive 00:05:47.630 suites 1 1 n/a 0 0 00:05:47.630 tests 4 4 4 0 0 00:05:47.630 asserts 152 152 152 0 n/a 00:05:47.630 00:05:47.630 Elapsed time = 0.214 seconds 00:05:47.630 00:05:47.630 real 0m0.229s 00:05:47.630 user 0m0.217s 00:05:47.630 sys 0m0.008s 00:05:47.630 00:21:33 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.630 00:21:33 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:47.630 ************************************ 00:05:47.630 END TEST env_memory 00:05:47.630 ************************************ 00:05:47.630 00:21:33 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:47.630 00:21:33 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.630 00:21:33 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.630 00:21:33 env -- common/autotest_common.sh@10 -- # set +x 00:05:47.890 ************************************ 00:05:47.890 START TEST env_vtophys 00:05:47.890 ************************************ 00:05:47.890 00:21:33 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:47.890 EAL: lib.eal log level changed from notice to debug 00:05:47.890 EAL: Detected lcore 0 as core 0 on socket 0 00:05:47.890 EAL: Detected lcore 1 as core 0 on socket 0 00:05:47.890 EAL: Detected lcore 2 as core 0 on socket 0 00:05:47.890 EAL: Detected lcore 3 as core 0 on socket 0 00:05:47.890 EAL: Detected lcore 4 as core 0 on socket 0 00:05:47.890 EAL: Detected lcore 5 as core 0 on socket 0 00:05:47.890 EAL: Detected lcore 6 as core 0 on socket 0 00:05:47.890 EAL: Detected lcore 7 as core 0 on socket 0 00:05:47.890 EAL: Detected lcore 8 as core 0 on socket 0 00:05:47.890 EAL: Detected lcore 9 as core 0 on socket 0 00:05:47.890 EAL: Maximum logical cores by configuration: 128 00:05:47.890 EAL: Detected CPU lcores: 10 00:05:47.890 EAL: Detected NUMA nodes: 1 00:05:47.890 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:47.890 EAL: Detected shared linkage of DPDK 00:05:47.890 EAL: 
open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:47.890 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:47.890 EAL: Registered [vdev] bus. 00:05:47.890 EAL: bus.vdev log level changed from disabled to notice 00:05:47.890 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:47.890 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:47.890 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:47.890 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:47.890 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:47.890 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:47.890 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:47.890 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:47.890 EAL: No shared files mode enabled, IPC will be disabled 00:05:47.890 EAL: No shared files mode enabled, IPC is disabled 00:05:47.890 EAL: Selected IOVA mode 'PA' 00:05:47.890 EAL: Probing VFIO support... 00:05:47.890 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:47.890 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:47.890 EAL: Ask a virtual area of 0x2e000 bytes 00:05:47.890 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:47.890 EAL: Setting up physically contiguous memory... 00:05:47.890 EAL: Setting maximum number of open files to 524288 00:05:47.890 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:47.890 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:47.890 EAL: Ask a virtual area of 0x61000 bytes 00:05:47.890 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:47.890 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:47.890 EAL: Ask a virtual area of 0x400000000 bytes 00:05:47.890 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:47.890 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:47.890 EAL: Ask a virtual area of 0x61000 bytes 00:05:47.890 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:47.890 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:47.890 EAL: Ask a virtual area of 0x400000000 bytes 00:05:47.890 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:47.890 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:47.890 EAL: Ask a virtual area of 0x61000 bytes 00:05:47.890 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:47.890 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:47.890 EAL: Ask a virtual area of 0x400000000 bytes 00:05:47.890 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:47.890 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:47.890 EAL: Ask a virtual area of 0x61000 bytes 00:05:47.890 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:47.890 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:47.890 EAL: Ask a virtual area of 0x400000000 bytes 00:05:47.890 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:47.890 EAL: VA reserved for memseg list at 0x200c00800000, size 
400000000 00:05:47.890 EAL: Hugepages will be freed exactly as allocated. 00:05:47.890 EAL: No shared files mode enabled, IPC is disabled 00:05:47.890 EAL: No shared files mode enabled, IPC is disabled 00:05:47.890 EAL: TSC frequency is ~2200000 KHz 00:05:47.890 EAL: Main lcore 0 is ready (tid=7f1dae85ca00;cpuset=[0]) 00:05:47.890 EAL: Trying to obtain current memory policy. 00:05:47.890 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.890 EAL: Restoring previous memory policy: 0 00:05:47.890 EAL: request: mp_malloc_sync 00:05:47.890 EAL: No shared files mode enabled, IPC is disabled 00:05:47.890 EAL: Heap on socket 0 was expanded by 2MB 00:05:47.890 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:47.890 EAL: No shared files mode enabled, IPC is disabled 00:05:47.890 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:47.890 EAL: Mem event callback 'spdk:(nil)' registered 00:05:47.890 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:47.890 00:05:47.890 00:05:47.890 CUnit - A unit testing framework for C - Version 2.1-3 00:05:47.891 http://cunit.sourceforge.net/ 00:05:47.891 00:05:47.891 00:05:47.891 Suite: components_suite 00:05:47.891 Test: vtophys_malloc_test ...passed 00:05:47.891 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:47.891 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.891 EAL: Restoring previous memory policy: 4 00:05:47.891 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.891 EAL: request: mp_malloc_sync 00:05:47.891 EAL: No shared files mode enabled, IPC is disabled 00:05:47.891 EAL: Heap on socket 0 was expanded by 4MB 00:05:47.891 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.891 EAL: request: mp_malloc_sync 00:05:47.891 EAL: No shared files mode enabled, IPC is disabled 00:05:47.891 EAL: Heap on socket 0 was shrunk by 4MB 00:05:47.891 EAL: Trying to obtain current memory policy. 00:05:47.891 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.891 EAL: Restoring previous memory policy: 4 00:05:47.891 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.891 EAL: request: mp_malloc_sync 00:05:47.891 EAL: No shared files mode enabled, IPC is disabled 00:05:47.891 EAL: Heap on socket 0 was expanded by 6MB 00:05:47.891 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.891 EAL: request: mp_malloc_sync 00:05:47.891 EAL: No shared files mode enabled, IPC is disabled 00:05:47.891 EAL: Heap on socket 0 was shrunk by 6MB 00:05:47.891 EAL: Trying to obtain current memory policy. 00:05:47.891 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.891 EAL: Restoring previous memory policy: 4 00:05:47.891 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.891 EAL: request: mp_malloc_sync 00:05:47.891 EAL: No shared files mode enabled, IPC is disabled 00:05:47.891 EAL: Heap on socket 0 was expanded by 10MB 00:05:47.891 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.891 EAL: request: mp_malloc_sync 00:05:47.891 EAL: No shared files mode enabled, IPC is disabled 00:05:47.891 EAL: Heap on socket 0 was shrunk by 10MB 00:05:47.891 EAL: Trying to obtain current memory policy. 
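(Editor's note) In the vtophys_malloc_test sequence that follows, each spdk malloc triggers the registered 'spdk:(nil)' mem event callback and the heap is expanded by a progressively larger amount (4MB, 6MB, 10MB, 18MB, ...); the matching free shrinks it again by the same size. A quick, hedged way to confirm that every expansion in a captured log has a matching shrink is a one-liner like the sketch below; the log file name is an assumption.

    # Hedged sketch: tally the EAL heap grow/shrink messages from a saved log.
    grep -oE 'Heap on socket [0-9]+ was (expanded|shrunk) by [0-9]+MB' vtophys.log \
        | sort | uniq -c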
00:05:47.891 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.891 EAL: Restoring previous memory policy: 4 00:05:47.891 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.891 EAL: request: mp_malloc_sync 00:05:47.891 EAL: No shared files mode enabled, IPC is disabled 00:05:47.891 EAL: Heap on socket 0 was expanded by 18MB 00:05:47.891 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.891 EAL: request: mp_malloc_sync 00:05:47.891 EAL: No shared files mode enabled, IPC is disabled 00:05:47.891 EAL: Heap on socket 0 was shrunk by 18MB 00:05:47.891 EAL: Trying to obtain current memory policy. 00:05:47.891 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.891 EAL: Restoring previous memory policy: 4 00:05:47.891 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.891 EAL: request: mp_malloc_sync 00:05:47.891 EAL: No shared files mode enabled, IPC is disabled 00:05:47.891 EAL: Heap on socket 0 was expanded by 34MB 00:05:47.891 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.891 EAL: request: mp_malloc_sync 00:05:47.891 EAL: No shared files mode enabled, IPC is disabled 00:05:47.891 EAL: Heap on socket 0 was shrunk by 34MB 00:05:47.891 EAL: Trying to obtain current memory policy. 00:05:47.891 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.891 EAL: Restoring previous memory policy: 4 00:05:47.891 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.891 EAL: request: mp_malloc_sync 00:05:47.891 EAL: No shared files mode enabled, IPC is disabled 00:05:47.891 EAL: Heap on socket 0 was expanded by 66MB 00:05:47.891 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.891 EAL: request: mp_malloc_sync 00:05:47.891 EAL: No shared files mode enabled, IPC is disabled 00:05:47.891 EAL: Heap on socket 0 was shrunk by 66MB 00:05:47.891 EAL: Trying to obtain current memory policy. 00:05:47.891 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.891 EAL: Restoring previous memory policy: 4 00:05:47.891 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.891 EAL: request: mp_malloc_sync 00:05:47.891 EAL: No shared files mode enabled, IPC is disabled 00:05:47.891 EAL: Heap on socket 0 was expanded by 130MB 00:05:47.891 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.891 EAL: request: mp_malloc_sync 00:05:47.891 EAL: No shared files mode enabled, IPC is disabled 00:05:47.891 EAL: Heap on socket 0 was shrunk by 130MB 00:05:47.891 EAL: Trying to obtain current memory policy. 00:05:47.891 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.150 EAL: Restoring previous memory policy: 4 00:05:48.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.150 EAL: request: mp_malloc_sync 00:05:48.150 EAL: No shared files mode enabled, IPC is disabled 00:05:48.150 EAL: Heap on socket 0 was expanded by 258MB 00:05:48.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.150 EAL: request: mp_malloc_sync 00:05:48.150 EAL: No shared files mode enabled, IPC is disabled 00:05:48.150 EAL: Heap on socket 0 was shrunk by 258MB 00:05:48.150 EAL: Trying to obtain current memory policy. 
00:05:48.150 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.150 EAL: Restoring previous memory policy: 4 00:05:48.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.150 EAL: request: mp_malloc_sync 00:05:48.150 EAL: No shared files mode enabled, IPC is disabled 00:05:48.150 EAL: Heap on socket 0 was expanded by 514MB 00:05:48.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.409 EAL: request: mp_malloc_sync 00:05:48.409 EAL: No shared files mode enabled, IPC is disabled 00:05:48.409 EAL: Heap on socket 0 was shrunk by 514MB 00:05:48.409 EAL: Trying to obtain current memory policy. 00:05:48.409 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.409 EAL: Restoring previous memory policy: 4 00:05:48.409 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.409 EAL: request: mp_malloc_sync 00:05:48.409 EAL: No shared files mode enabled, IPC is disabled 00:05:48.409 EAL: Heap on socket 0 was expanded by 1026MB 00:05:48.669 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.669 passed 00:05:48.669 00:05:48.669 Run Summary: Type Total Ran Passed Failed Inactive 00:05:48.669 suites 1 1 n/a 0 0 00:05:48.669 tests 2 2 2 0 0 00:05:48.669 asserts 6100 6100 6100 0 n/a 00:05:48.669 00:05:48.669 Elapsed time = 0.707 seconds 00:05:48.669 EAL: request: mp_malloc_sync 00:05:48.669 EAL: No shared files mode enabled, IPC is disabled 00:05:48.669 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:48.669 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.669 EAL: request: mp_malloc_sync 00:05:48.669 EAL: No shared files mode enabled, IPC is disabled 00:05:48.669 EAL: Heap on socket 0 was shrunk by 2MB 00:05:48.669 EAL: No shared files mode enabled, IPC is disabled 00:05:48.669 EAL: No shared files mode enabled, IPC is disabled 00:05:48.669 EAL: No shared files mode enabled, IPC is disabled 00:05:48.669 00:05:48.669 real 0m0.898s 00:05:48.669 user 0m0.445s 00:05:48.669 sys 0m0.323s 00:05:48.669 00:21:34 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.669 00:21:34 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:48.669 ************************************ 00:05:48.669 END TEST env_vtophys 00:05:48.669 ************************************ 00:05:48.669 00:21:34 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:48.669 00:21:34 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.669 00:21:34 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.669 00:21:34 env -- common/autotest_common.sh@10 -- # set +x 00:05:48.669 ************************************ 00:05:48.669 START TEST env_pci 00:05:48.669 ************************************ 00:05:48.669 00:21:34 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:48.669 00:05:48.669 00:05:48.669 CUnit - A unit testing framework for C - Version 2.1-3 00:05:48.669 http://cunit.sourceforge.net/ 00:05:48.669 00:05:48.669 00:05:48.669 Suite: pci 00:05:48.669 Test: pci_hook ...[2024-12-17 00:21:34.597326] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 68818 has claimed it 00:05:48.669 passed 00:05:48.669 00:05:48.669 Run Summary: Type Total Ran Passed Failed Inactive 00:05:48.669 suites 1 1 n/a 0 0 00:05:48.669 tests 1 1 1 0 0 00:05:48.669 asserts 25 25 25 0 n/a 00:05:48.669 00:05:48.669 Elapsed time = 0.002 seconds 00:05:48.669 EAL: Cannot find 
device (10000:00:01.0) 00:05:48.669 EAL: Failed to attach device on primary process 00:05:48.669 00:05:48.669 real 0m0.020s 00:05:48.669 user 0m0.011s 00:05:48.669 sys 0m0.009s 00:05:48.669 00:21:34 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.669 00:21:34 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:48.669 ************************************ 00:05:48.669 END TEST env_pci 00:05:48.669 ************************************ 00:05:48.669 00:21:34 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:48.669 00:21:34 env -- env/env.sh@15 -- # uname 00:05:48.669 00:21:34 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:48.669 00:21:34 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:48.669 00:21:34 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:48.669 00:21:34 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:48.669 00:21:34 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.669 00:21:34 env -- common/autotest_common.sh@10 -- # set +x 00:05:48.669 ************************************ 00:05:48.669 START TEST env_dpdk_post_init 00:05:48.669 ************************************ 00:05:48.669 00:21:34 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:48.928 EAL: Detected CPU lcores: 10 00:05:48.928 EAL: Detected NUMA nodes: 1 00:05:48.928 EAL: Detected shared linkage of DPDK 00:05:48.928 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:48.928 EAL: Selected IOVA mode 'PA' 00:05:48.928 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:48.928 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:48.928 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:48.928 Starting DPDK initialization... 00:05:48.928 Starting SPDK post initialization... 00:05:48.928 SPDK NVMe probe 00:05:48.928 Attaching to 0000:00:10.0 00:05:48.928 Attaching to 0000:00:11.0 00:05:48.928 Attached to 0000:00:10.0 00:05:48.928 Attached to 0000:00:11.0 00:05:48.928 Cleaning up... 
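(Editor's note) env_dpdk_post_init above probes and attaches the two emulated controllers at 0000:00:10.0 and 0000:00:11.0, which setup.sh earlier rebound between the kernel nvme driver and uio_pci_generic. The hedged sketch below is not part of the test suite; it just reads the standard sysfs driver symlinks to confirm which driver each BDF from the log is currently bound to.

    # Hedged sketch: report the bound kernel driver for the BDFs seen in the log.
    for bdf in 0000:00:10.0 0000:00:11.0; do
        if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
            echo "$bdf -> $(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")"
        else
            echo "$bdf -> no driver"
        fi
    done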
00:05:48.928 00:05:48.928 real 0m0.166s 00:05:48.928 user 0m0.036s 00:05:48.928 sys 0m0.030s 00:05:48.928 00:21:34 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.928 00:21:34 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:48.928 ************************************ 00:05:48.928 END TEST env_dpdk_post_init 00:05:48.928 ************************************ 00:05:48.928 00:21:34 env -- env/env.sh@26 -- # uname 00:05:48.928 00:21:34 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:48.928 00:21:34 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:48.928 00:21:34 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.928 00:21:34 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.928 00:21:34 env -- common/autotest_common.sh@10 -- # set +x 00:05:48.928 ************************************ 00:05:48.928 START TEST env_mem_callbacks 00:05:48.928 ************************************ 00:05:48.928 00:21:34 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:48.928 EAL: Detected CPU lcores: 10 00:05:48.928 EAL: Detected NUMA nodes: 1 00:05:48.928 EAL: Detected shared linkage of DPDK 00:05:48.928 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:48.928 EAL: Selected IOVA mode 'PA' 00:05:49.187 00:05:49.187 00:05:49.187 CUnit - A unit testing framework for C - Version 2.1-3 00:05:49.187 http://cunit.sourceforge.net/ 00:05:49.187 00:05:49.187 00:05:49.187 Suite: memory 00:05:49.187 Test: test ... 00:05:49.187 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:49.187 register 0x200000200000 2097152 00:05:49.187 malloc 3145728 00:05:49.187 register 0x200000400000 4194304 00:05:49.187 buf 0x200000500000 len 3145728 PASSED 00:05:49.187 malloc 64 00:05:49.187 buf 0x2000004fff40 len 64 PASSED 00:05:49.188 malloc 4194304 00:05:49.188 register 0x200000800000 6291456 00:05:49.188 buf 0x200000a00000 len 4194304 PASSED 00:05:49.188 free 0x200000500000 3145728 00:05:49.188 free 0x2000004fff40 64 00:05:49.188 unregister 0x200000400000 4194304 PASSED 00:05:49.188 free 0x200000a00000 4194304 00:05:49.188 unregister 0x200000800000 6291456 PASSED 00:05:49.188 malloc 8388608 00:05:49.188 register 0x200000400000 10485760 00:05:49.188 buf 0x200000600000 len 8388608 PASSED 00:05:49.188 free 0x200000600000 8388608 00:05:49.188 unregister 0x200000400000 10485760 PASSED 00:05:49.188 passed 00:05:49.188 00:05:49.188 Run Summary: Type Total Ran Passed Failed Inactive 00:05:49.188 suites 1 1 n/a 0 0 00:05:49.188 tests 1 1 1 0 0 00:05:49.188 asserts 15 15 15 0 n/a 00:05:49.188 00:05:49.188 Elapsed time = 0.006 seconds 00:05:49.188 00:05:49.188 real 0m0.138s 00:05:49.188 user 0m0.017s 00:05:49.188 sys 0m0.019s 00:05:49.188 00:21:35 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.188 00:21:35 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:49.188 ************************************ 00:05:49.188 END TEST env_mem_callbacks 00:05:49.188 ************************************ 00:05:49.188 00:05:49.188 real 0m1.942s 00:05:49.188 user 0m0.949s 00:05:49.188 sys 0m0.643s 00:05:49.188 00:21:35 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.188 00:21:35 env -- common/autotest_common.sh@10 -- # set +x 00:05:49.188 ************************************ 00:05:49.188 END TEST env 00:05:49.188 
************************************ 00:05:49.188 00:21:35 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:49.188 00:21:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.188 00:21:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.188 00:21:35 -- common/autotest_common.sh@10 -- # set +x 00:05:49.188 ************************************ 00:05:49.188 START TEST rpc 00:05:49.188 ************************************ 00:05:49.188 00:21:35 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:49.188 * Looking for test storage... 00:05:49.447 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:49.447 00:21:35 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:49.447 00:21:35 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:49.447 00:21:35 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:49.447 00:21:35 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:49.447 00:21:35 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.447 00:21:35 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.447 00:21:35 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.447 00:21:35 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.447 00:21:35 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.447 00:21:35 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.447 00:21:35 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.447 00:21:35 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.447 00:21:35 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.447 00:21:35 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.447 00:21:35 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.447 00:21:35 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:49.447 00:21:35 rpc -- scripts/common.sh@345 -- # : 1 00:05:49.447 00:21:35 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.447 00:21:35 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:49.447 00:21:35 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:49.447 00:21:35 rpc -- scripts/common.sh@353 -- # local d=1 00:05:49.447 00:21:35 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.447 00:21:35 rpc -- scripts/common.sh@355 -- # echo 1 00:05:49.447 00:21:35 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.447 00:21:35 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:49.447 00:21:35 rpc -- scripts/common.sh@353 -- # local d=2 00:05:49.447 00:21:35 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.447 00:21:35 rpc -- scripts/common.sh@355 -- # echo 2 00:05:49.447 00:21:35 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.447 00:21:35 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.447 00:21:35 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.447 00:21:35 rpc -- scripts/common.sh@368 -- # return 0 00:05:49.447 00:21:35 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.447 00:21:35 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:49.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.447 --rc genhtml_branch_coverage=1 00:05:49.447 --rc genhtml_function_coverage=1 00:05:49.447 --rc genhtml_legend=1 00:05:49.447 --rc geninfo_all_blocks=1 00:05:49.447 --rc geninfo_unexecuted_blocks=1 00:05:49.447 00:05:49.447 ' 00:05:49.447 00:21:35 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:49.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.447 --rc genhtml_branch_coverage=1 00:05:49.447 --rc genhtml_function_coverage=1 00:05:49.447 --rc genhtml_legend=1 00:05:49.447 --rc geninfo_all_blocks=1 00:05:49.447 --rc geninfo_unexecuted_blocks=1 00:05:49.447 00:05:49.447 ' 00:05:49.447 00:21:35 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:49.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.447 --rc genhtml_branch_coverage=1 00:05:49.447 --rc genhtml_function_coverage=1 00:05:49.447 --rc genhtml_legend=1 00:05:49.447 --rc geninfo_all_blocks=1 00:05:49.447 --rc geninfo_unexecuted_blocks=1 00:05:49.447 00:05:49.447 ' 00:05:49.447 00:21:35 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:49.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.447 --rc genhtml_branch_coverage=1 00:05:49.447 --rc genhtml_function_coverage=1 00:05:49.447 --rc genhtml_legend=1 00:05:49.447 --rc geninfo_all_blocks=1 00:05:49.447 --rc geninfo_unexecuted_blocks=1 00:05:49.447 00:05:49.447 ' 00:05:49.447 00:21:35 rpc -- rpc/rpc.sh@65 -- # spdk_pid=68941 00:05:49.447 00:21:35 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.447 00:21:35 rpc -- rpc/rpc.sh@67 -- # waitforlisten 68941 00:05:49.447 00:21:35 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:49.447 00:21:35 rpc -- common/autotest_common.sh@831 -- # '[' -z 68941 ']' 00:05:49.447 00:21:35 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.447 00:21:35 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:49.447 00:21:35 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:49.447 00:21:35 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:49.447 00:21:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.447 [2024-12-17 00:21:35.371190] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:49.447 [2024-12-17 00:21:35.371333] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68941 ] 00:05:49.706 [2024-12-17 00:21:35.509224] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.706 [2024-12-17 00:21:35.553623] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:49.706 [2024-12-17 00:21:35.553690] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 68941' to capture a snapshot of events at runtime. 00:05:49.706 [2024-12-17 00:21:35.553704] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:49.706 [2024-12-17 00:21:35.553714] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:49.706 [2024-12-17 00:21:35.553723] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid68941 for offline analysis/debug. 00:05:49.706 [2024-12-17 00:21:35.553755] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.706 [2024-12-17 00:21:35.595247] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:49.965 00:21:35 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:49.965 00:21:35 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:49.965 00:21:35 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:49.965 00:21:35 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:49.965 00:21:35 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:49.965 00:21:35 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:49.965 00:21:35 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.965 00:21:35 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.965 00:21:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.965 ************************************ 00:05:49.965 START TEST rpc_integrity 00:05:49.965 ************************************ 00:05:49.965 00:21:35 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:49.965 00:21:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:49.965 00:21:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.965 00:21:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.965 00:21:35 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.965 00:21:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:49.965 00:21:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:49.965 00:21:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:49.966 00:21:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd 
bdev_malloc_create 8 512 00:05:49.966 00:21:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.966 00:21:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.966 00:21:35 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.966 00:21:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:49.966 00:21:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:49.966 00:21:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.966 00:21:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.966 00:21:35 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.966 00:21:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:49.966 { 00:05:49.966 "name": "Malloc0", 00:05:49.966 "aliases": [ 00:05:49.966 "8353f140-f79d-4e6b-8cb3-8599c3072167" 00:05:49.966 ], 00:05:49.966 "product_name": "Malloc disk", 00:05:49.966 "block_size": 512, 00:05:49.966 "num_blocks": 16384, 00:05:49.966 "uuid": "8353f140-f79d-4e6b-8cb3-8599c3072167", 00:05:49.966 "assigned_rate_limits": { 00:05:49.966 "rw_ios_per_sec": 0, 00:05:49.966 "rw_mbytes_per_sec": 0, 00:05:49.966 "r_mbytes_per_sec": 0, 00:05:49.966 "w_mbytes_per_sec": 0 00:05:49.966 }, 00:05:49.966 "claimed": false, 00:05:49.966 "zoned": false, 00:05:49.966 "supported_io_types": { 00:05:49.966 "read": true, 00:05:49.966 "write": true, 00:05:49.966 "unmap": true, 00:05:49.966 "flush": true, 00:05:49.966 "reset": true, 00:05:49.966 "nvme_admin": false, 00:05:49.966 "nvme_io": false, 00:05:49.966 "nvme_io_md": false, 00:05:49.966 "write_zeroes": true, 00:05:49.966 "zcopy": true, 00:05:49.966 "get_zone_info": false, 00:05:49.966 "zone_management": false, 00:05:49.966 "zone_append": false, 00:05:49.966 "compare": false, 00:05:49.966 "compare_and_write": false, 00:05:49.966 "abort": true, 00:05:49.966 "seek_hole": false, 00:05:49.966 "seek_data": false, 00:05:49.966 "copy": true, 00:05:49.966 "nvme_iov_md": false 00:05:49.966 }, 00:05:49.966 "memory_domains": [ 00:05:49.966 { 00:05:49.966 "dma_device_id": "system", 00:05:49.966 "dma_device_type": 1 00:05:49.966 }, 00:05:49.966 { 00:05:49.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.966 "dma_device_type": 2 00:05:49.966 } 00:05:49.966 ], 00:05:49.966 "driver_specific": {} 00:05:49.966 } 00:05:49.966 ]' 00:05:49.966 00:21:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:49.966 00:21:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:49.966 00:21:35 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:49.966 00:21:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.966 00:21:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.966 [2024-12-17 00:21:35.886522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:49.966 [2024-12-17 00:21:35.886578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:49.966 [2024-12-17 00:21:35.886601] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf87500 00:05:49.966 [2024-12-17 00:21:35.886611] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:49.966 [2024-12-17 00:21:35.888036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:49.966 [2024-12-17 00:21:35.888083] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
Passthru0 00:05:49.966 Passthru0 00:05:49.966 00:21:35 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.966 00:21:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:49.966 00:21:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.966 00:21:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.966 00:21:35 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.966 00:21:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:49.966 { 00:05:49.966 "name": "Malloc0", 00:05:49.966 "aliases": [ 00:05:49.966 "8353f140-f79d-4e6b-8cb3-8599c3072167" 00:05:49.966 ], 00:05:49.966 "product_name": "Malloc disk", 00:05:49.966 "block_size": 512, 00:05:49.966 "num_blocks": 16384, 00:05:49.966 "uuid": "8353f140-f79d-4e6b-8cb3-8599c3072167", 00:05:49.966 "assigned_rate_limits": { 00:05:49.966 "rw_ios_per_sec": 0, 00:05:49.966 "rw_mbytes_per_sec": 0, 00:05:49.966 "r_mbytes_per_sec": 0, 00:05:49.966 "w_mbytes_per_sec": 0 00:05:49.966 }, 00:05:49.966 "claimed": true, 00:05:49.966 "claim_type": "exclusive_write", 00:05:49.966 "zoned": false, 00:05:49.966 "supported_io_types": { 00:05:49.966 "read": true, 00:05:49.966 "write": true, 00:05:49.966 "unmap": true, 00:05:49.966 "flush": true, 00:05:49.966 "reset": true, 00:05:49.966 "nvme_admin": false, 00:05:49.966 "nvme_io": false, 00:05:49.966 "nvme_io_md": false, 00:05:49.966 "write_zeroes": true, 00:05:49.966 "zcopy": true, 00:05:49.966 "get_zone_info": false, 00:05:49.966 "zone_management": false, 00:05:49.966 "zone_append": false, 00:05:49.966 "compare": false, 00:05:49.966 "compare_and_write": false, 00:05:49.966 "abort": true, 00:05:49.966 "seek_hole": false, 00:05:49.966 "seek_data": false, 00:05:49.966 "copy": true, 00:05:49.966 "nvme_iov_md": false 00:05:49.966 }, 00:05:49.966 "memory_domains": [ 00:05:49.966 { 00:05:49.966 "dma_device_id": "system", 00:05:49.966 "dma_device_type": 1 00:05:49.966 }, 00:05:49.966 { 00:05:49.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.966 "dma_device_type": 2 00:05:49.966 } 00:05:49.966 ], 00:05:49.966 "driver_specific": {} 00:05:49.966 }, 00:05:49.966 { 00:05:49.966 "name": "Passthru0", 00:05:49.966 "aliases": [ 00:05:49.966 "494efafe-b4d7-5089-85e8-6d5e742c6881" 00:05:49.966 ], 00:05:49.966 "product_name": "passthru", 00:05:49.966 "block_size": 512, 00:05:49.966 "num_blocks": 16384, 00:05:49.966 "uuid": "494efafe-b4d7-5089-85e8-6d5e742c6881", 00:05:49.966 "assigned_rate_limits": { 00:05:49.966 "rw_ios_per_sec": 0, 00:05:49.966 "rw_mbytes_per_sec": 0, 00:05:49.966 "r_mbytes_per_sec": 0, 00:05:49.966 "w_mbytes_per_sec": 0 00:05:49.966 }, 00:05:49.966 "claimed": false, 00:05:49.966 "zoned": false, 00:05:49.966 "supported_io_types": { 00:05:49.966 "read": true, 00:05:49.966 "write": true, 00:05:49.966 "unmap": true, 00:05:49.966 "flush": true, 00:05:49.966 "reset": true, 00:05:49.966 "nvme_admin": false, 00:05:49.966 "nvme_io": false, 00:05:49.966 "nvme_io_md": false, 00:05:49.966 "write_zeroes": true, 00:05:49.966 "zcopy": true, 00:05:49.966 "get_zone_info": false, 00:05:49.966 "zone_management": false, 00:05:49.966 "zone_append": false, 00:05:49.966 "compare": false, 00:05:49.966 "compare_and_write": false, 00:05:49.966 "abort": true, 00:05:49.966 "seek_hole": false, 00:05:49.966 "seek_data": false, 00:05:49.966 "copy": true, 00:05:49.966 "nvme_iov_md": false 00:05:49.966 }, 00:05:49.966 "memory_domains": [ 00:05:49.966 { 00:05:49.966 "dma_device_id": "system", 00:05:49.966 
"dma_device_type": 1 00:05:49.966 }, 00:05:49.966 { 00:05:49.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.966 "dma_device_type": 2 00:05:49.966 } 00:05:49.966 ], 00:05:49.966 "driver_specific": { 00:05:49.966 "passthru": { 00:05:49.966 "name": "Passthru0", 00:05:49.966 "base_bdev_name": "Malloc0" 00:05:49.966 } 00:05:49.966 } 00:05:49.966 } 00:05:49.966 ]' 00:05:49.966 00:21:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:50.225 00:21:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:50.225 00:21:36 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:50.225 00:21:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.225 00:21:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.225 00:21:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.225 00:21:36 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:50.225 00:21:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.225 00:21:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.225 00:21:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.225 00:21:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:50.225 00:21:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.225 00:21:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.225 00:21:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.225 00:21:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:50.225 00:21:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:50.225 00:21:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:50.225 00:05:50.225 real 0m0.348s 00:05:50.225 user 0m0.243s 00:05:50.225 sys 0m0.042s 00:05:50.225 00:21:36 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.225 00:21:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.225 ************************************ 00:05:50.225 END TEST rpc_integrity 00:05:50.225 ************************************ 00:05:50.225 00:21:36 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:50.225 00:21:36 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.225 00:21:36 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.225 00:21:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.225 ************************************ 00:05:50.225 START TEST rpc_plugins 00:05:50.225 ************************************ 00:05:50.225 00:21:36 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:50.225 00:21:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:50.225 00:21:36 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.225 00:21:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:50.225 00:21:36 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.225 00:21:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:50.225 00:21:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:50.225 00:21:36 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.226 00:21:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:50.226 00:21:36 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:05:50.226 00:21:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:50.226 { 00:05:50.226 "name": "Malloc1", 00:05:50.226 "aliases": [ 00:05:50.226 "5fff8d18-b741-4d58-82b5-d93c8e913b7c" 00:05:50.226 ], 00:05:50.226 "product_name": "Malloc disk", 00:05:50.226 "block_size": 4096, 00:05:50.226 "num_blocks": 256, 00:05:50.226 "uuid": "5fff8d18-b741-4d58-82b5-d93c8e913b7c", 00:05:50.226 "assigned_rate_limits": { 00:05:50.226 "rw_ios_per_sec": 0, 00:05:50.226 "rw_mbytes_per_sec": 0, 00:05:50.226 "r_mbytes_per_sec": 0, 00:05:50.226 "w_mbytes_per_sec": 0 00:05:50.226 }, 00:05:50.226 "claimed": false, 00:05:50.226 "zoned": false, 00:05:50.226 "supported_io_types": { 00:05:50.226 "read": true, 00:05:50.226 "write": true, 00:05:50.226 "unmap": true, 00:05:50.226 "flush": true, 00:05:50.226 "reset": true, 00:05:50.226 "nvme_admin": false, 00:05:50.226 "nvme_io": false, 00:05:50.226 "nvme_io_md": false, 00:05:50.226 "write_zeroes": true, 00:05:50.226 "zcopy": true, 00:05:50.226 "get_zone_info": false, 00:05:50.226 "zone_management": false, 00:05:50.226 "zone_append": false, 00:05:50.226 "compare": false, 00:05:50.226 "compare_and_write": false, 00:05:50.226 "abort": true, 00:05:50.226 "seek_hole": false, 00:05:50.226 "seek_data": false, 00:05:50.226 "copy": true, 00:05:50.226 "nvme_iov_md": false 00:05:50.226 }, 00:05:50.226 "memory_domains": [ 00:05:50.226 { 00:05:50.226 "dma_device_id": "system", 00:05:50.226 "dma_device_type": 1 00:05:50.226 }, 00:05:50.226 { 00:05:50.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:50.226 "dma_device_type": 2 00:05:50.226 } 00:05:50.226 ], 00:05:50.226 "driver_specific": {} 00:05:50.226 } 00:05:50.226 ]' 00:05:50.226 00:21:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:50.484 00:21:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:50.484 00:21:36 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:50.484 00:21:36 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.484 00:21:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:50.484 00:21:36 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.484 00:21:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:50.484 00:21:36 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.484 00:21:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:50.484 00:21:36 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.484 00:21:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:50.484 00:21:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:50.484 00:21:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:50.484 00:05:50.484 real 0m0.155s 00:05:50.484 user 0m0.104s 00:05:50.484 sys 0m0.018s 00:05:50.484 00:21:36 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.484 ************************************ 00:05:50.484 END TEST rpc_plugins 00:05:50.484 00:21:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:50.484 ************************************ 00:05:50.484 00:21:36 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:50.484 00:21:36 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.484 00:21:36 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.484 00:21:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.484 ************************************ 00:05:50.484 START TEST 
rpc_trace_cmd_test 00:05:50.484 ************************************ 00:05:50.484 00:21:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:50.484 00:21:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:50.484 00:21:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:50.484 00:21:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.484 00:21:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:50.484 00:21:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.484 00:21:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:50.484 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid68941", 00:05:50.484 "tpoint_group_mask": "0x8", 00:05:50.484 "iscsi_conn": { 00:05:50.484 "mask": "0x2", 00:05:50.484 "tpoint_mask": "0x0" 00:05:50.484 }, 00:05:50.484 "scsi": { 00:05:50.484 "mask": "0x4", 00:05:50.484 "tpoint_mask": "0x0" 00:05:50.484 }, 00:05:50.484 "bdev": { 00:05:50.484 "mask": "0x8", 00:05:50.484 "tpoint_mask": "0xffffffffffffffff" 00:05:50.484 }, 00:05:50.484 "nvmf_rdma": { 00:05:50.484 "mask": "0x10", 00:05:50.484 "tpoint_mask": "0x0" 00:05:50.484 }, 00:05:50.484 "nvmf_tcp": { 00:05:50.484 "mask": "0x20", 00:05:50.484 "tpoint_mask": "0x0" 00:05:50.484 }, 00:05:50.484 "ftl": { 00:05:50.484 "mask": "0x40", 00:05:50.484 "tpoint_mask": "0x0" 00:05:50.484 }, 00:05:50.484 "blobfs": { 00:05:50.484 "mask": "0x80", 00:05:50.484 "tpoint_mask": "0x0" 00:05:50.484 }, 00:05:50.484 "dsa": { 00:05:50.484 "mask": "0x200", 00:05:50.484 "tpoint_mask": "0x0" 00:05:50.484 }, 00:05:50.484 "thread": { 00:05:50.484 "mask": "0x400", 00:05:50.484 "tpoint_mask": "0x0" 00:05:50.484 }, 00:05:50.484 "nvme_pcie": { 00:05:50.484 "mask": "0x800", 00:05:50.484 "tpoint_mask": "0x0" 00:05:50.484 }, 00:05:50.484 "iaa": { 00:05:50.484 "mask": "0x1000", 00:05:50.484 "tpoint_mask": "0x0" 00:05:50.484 }, 00:05:50.484 "nvme_tcp": { 00:05:50.484 "mask": "0x2000", 00:05:50.484 "tpoint_mask": "0x0" 00:05:50.484 }, 00:05:50.484 "bdev_nvme": { 00:05:50.484 "mask": "0x4000", 00:05:50.484 "tpoint_mask": "0x0" 00:05:50.484 }, 00:05:50.484 "sock": { 00:05:50.484 "mask": "0x8000", 00:05:50.484 "tpoint_mask": "0x0" 00:05:50.484 }, 00:05:50.484 "blob": { 00:05:50.484 "mask": "0x10000", 00:05:50.484 "tpoint_mask": "0x0" 00:05:50.484 }, 00:05:50.484 "bdev_raid": { 00:05:50.484 "mask": "0x20000", 00:05:50.484 "tpoint_mask": "0x0" 00:05:50.484 } 00:05:50.484 }' 00:05:50.484 00:21:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:50.484 00:21:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:05:50.484 00:21:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:50.484 00:21:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:50.484 00:21:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:50.743 00:21:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:50.743 00:21:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:50.743 00:21:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:50.743 00:21:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:50.743 00:21:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:50.743 00:05:50.743 real 0m0.270s 00:05:50.743 user 0m0.232s 00:05:50.743 sys 0m0.026s 00:05:50.743 00:21:36 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.743 ************************************ 00:05:50.743 END TEST rpc_trace_cmd_test 00:05:50.743 00:21:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:50.743 ************************************ 00:05:50.743 00:21:36 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:50.743 00:21:36 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:50.743 00:21:36 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:50.743 00:21:36 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.743 00:21:36 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.743 00:21:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.743 ************************************ 00:05:50.743 START TEST rpc_daemon_integrity 00:05:50.743 ************************************ 00:05:50.743 00:21:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:50.743 00:21:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:50.743 00:21:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.743 00:21:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.743 00:21:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.743 00:21:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:50.743 00:21:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:50.743 00:21:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:50.743 00:21:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:50.743 00:21:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.743 00:21:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.743 00:21:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.743 00:21:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:50.743 00:21:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:50.743 00:21:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.743 00:21:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.002 00:21:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.002 00:21:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:51.002 { 00:05:51.002 "name": "Malloc2", 00:05:51.002 "aliases": [ 00:05:51.002 "501753f2-e0a6-4eb4-a446-64737ef93f20" 00:05:51.002 ], 00:05:51.002 "product_name": "Malloc disk", 00:05:51.002 "block_size": 512, 00:05:51.002 "num_blocks": 16384, 00:05:51.002 "uuid": "501753f2-e0a6-4eb4-a446-64737ef93f20", 00:05:51.002 "assigned_rate_limits": { 00:05:51.002 "rw_ios_per_sec": 0, 00:05:51.002 "rw_mbytes_per_sec": 0, 00:05:51.002 "r_mbytes_per_sec": 0, 00:05:51.002 "w_mbytes_per_sec": 0 00:05:51.002 }, 00:05:51.002 "claimed": false, 00:05:51.002 "zoned": false, 00:05:51.002 "supported_io_types": { 00:05:51.002 "read": true, 00:05:51.002 "write": true, 00:05:51.002 "unmap": true, 00:05:51.002 "flush": true, 00:05:51.002 "reset": true, 00:05:51.002 "nvme_admin": false, 00:05:51.002 "nvme_io": false, 00:05:51.002 "nvme_io_md": false, 00:05:51.002 "write_zeroes": true, 00:05:51.002 "zcopy": true, 00:05:51.002 "get_zone_info": false, 00:05:51.002 "zone_management": false, 00:05:51.002 "zone_append": false, 
00:05:51.002 "compare": false, 00:05:51.002 "compare_and_write": false, 00:05:51.002 "abort": true, 00:05:51.002 "seek_hole": false, 00:05:51.002 "seek_data": false, 00:05:51.002 "copy": true, 00:05:51.002 "nvme_iov_md": false 00:05:51.002 }, 00:05:51.002 "memory_domains": [ 00:05:51.002 { 00:05:51.002 "dma_device_id": "system", 00:05:51.002 "dma_device_type": 1 00:05:51.002 }, 00:05:51.002 { 00:05:51.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.002 "dma_device_type": 2 00:05:51.002 } 00:05:51.002 ], 00:05:51.002 "driver_specific": {} 00:05:51.002 } 00:05:51.002 ]' 00:05:51.002 00:21:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:51.002 00:21:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:51.002 00:21:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:51.002 00:21:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.002 00:21:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.002 [2024-12-17 00:21:36.812053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:51.002 [2024-12-17 00:21:36.812129] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:51.002 [2024-12-17 00:21:36.812147] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xed44b0 00:05:51.002 [2024-12-17 00:21:36.812155] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:51.002 [2024-12-17 00:21:36.813584] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:51.002 [2024-12-17 00:21:36.813636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:51.002 Passthru0 00:05:51.002 00:21:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.002 00:21:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:51.002 00:21:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.002 00:21:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.002 00:21:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.002 00:21:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:51.002 { 00:05:51.002 "name": "Malloc2", 00:05:51.002 "aliases": [ 00:05:51.002 "501753f2-e0a6-4eb4-a446-64737ef93f20" 00:05:51.002 ], 00:05:51.002 "product_name": "Malloc disk", 00:05:51.002 "block_size": 512, 00:05:51.002 "num_blocks": 16384, 00:05:51.002 "uuid": "501753f2-e0a6-4eb4-a446-64737ef93f20", 00:05:51.002 "assigned_rate_limits": { 00:05:51.002 "rw_ios_per_sec": 0, 00:05:51.002 "rw_mbytes_per_sec": 0, 00:05:51.002 "r_mbytes_per_sec": 0, 00:05:51.002 "w_mbytes_per_sec": 0 00:05:51.002 }, 00:05:51.002 "claimed": true, 00:05:51.002 "claim_type": "exclusive_write", 00:05:51.002 "zoned": false, 00:05:51.002 "supported_io_types": { 00:05:51.002 "read": true, 00:05:51.002 "write": true, 00:05:51.002 "unmap": true, 00:05:51.002 "flush": true, 00:05:51.002 "reset": true, 00:05:51.002 "nvme_admin": false, 00:05:51.002 "nvme_io": false, 00:05:51.002 "nvme_io_md": false, 00:05:51.002 "write_zeroes": true, 00:05:51.002 "zcopy": true, 00:05:51.002 "get_zone_info": false, 00:05:51.002 "zone_management": false, 00:05:51.002 "zone_append": false, 00:05:51.002 "compare": false, 00:05:51.002 "compare_and_write": false, 00:05:51.002 "abort": true, 00:05:51.002 "seek_hole": 
false, 00:05:51.002 "seek_data": false, 00:05:51.002 "copy": true, 00:05:51.002 "nvme_iov_md": false 00:05:51.002 }, 00:05:51.002 "memory_domains": [ 00:05:51.002 { 00:05:51.002 "dma_device_id": "system", 00:05:51.002 "dma_device_type": 1 00:05:51.002 }, 00:05:51.002 { 00:05:51.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.002 "dma_device_type": 2 00:05:51.002 } 00:05:51.002 ], 00:05:51.002 "driver_specific": {} 00:05:51.002 }, 00:05:51.002 { 00:05:51.002 "name": "Passthru0", 00:05:51.002 "aliases": [ 00:05:51.002 "0265dfad-ac8b-5d07-be8f-cabea950fe05" 00:05:51.002 ], 00:05:51.002 "product_name": "passthru", 00:05:51.002 "block_size": 512, 00:05:51.002 "num_blocks": 16384, 00:05:51.002 "uuid": "0265dfad-ac8b-5d07-be8f-cabea950fe05", 00:05:51.002 "assigned_rate_limits": { 00:05:51.002 "rw_ios_per_sec": 0, 00:05:51.002 "rw_mbytes_per_sec": 0, 00:05:51.002 "r_mbytes_per_sec": 0, 00:05:51.002 "w_mbytes_per_sec": 0 00:05:51.002 }, 00:05:51.002 "claimed": false, 00:05:51.002 "zoned": false, 00:05:51.002 "supported_io_types": { 00:05:51.002 "read": true, 00:05:51.002 "write": true, 00:05:51.002 "unmap": true, 00:05:51.002 "flush": true, 00:05:51.002 "reset": true, 00:05:51.002 "nvme_admin": false, 00:05:51.002 "nvme_io": false, 00:05:51.002 "nvme_io_md": false, 00:05:51.002 "write_zeroes": true, 00:05:51.002 "zcopy": true, 00:05:51.002 "get_zone_info": false, 00:05:51.002 "zone_management": false, 00:05:51.002 "zone_append": false, 00:05:51.002 "compare": false, 00:05:51.002 "compare_and_write": false, 00:05:51.002 "abort": true, 00:05:51.002 "seek_hole": false, 00:05:51.002 "seek_data": false, 00:05:51.002 "copy": true, 00:05:51.002 "nvme_iov_md": false 00:05:51.002 }, 00:05:51.002 "memory_domains": [ 00:05:51.002 { 00:05:51.002 "dma_device_id": "system", 00:05:51.002 "dma_device_type": 1 00:05:51.002 }, 00:05:51.002 { 00:05:51.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.002 "dma_device_type": 2 00:05:51.002 } 00:05:51.002 ], 00:05:51.003 "driver_specific": { 00:05:51.003 "passthru": { 00:05:51.003 "name": "Passthru0", 00:05:51.003 "base_bdev_name": "Malloc2" 00:05:51.003 } 00:05:51.003 } 00:05:51.003 } 00:05:51.003 ]' 00:05:51.003 00:21:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:51.003 00:21:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:51.003 00:21:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:51.003 00:21:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.003 00:21:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.003 00:21:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.003 00:21:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:51.003 00:21:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.003 00:21:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.003 00:21:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.003 00:21:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:51.003 00:21:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.003 00:21:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.003 00:21:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.003 00:21:36 rpc.rpc_daemon_integrity -- 
rpc/rpc.sh@25 -- # bdevs='[]' 00:05:51.003 00:21:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:51.003 00:21:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:51.003 00:05:51.003 real 0m0.312s 00:05:51.003 user 0m0.210s 00:05:51.003 sys 0m0.044s 00:05:51.003 00:21:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.003 ************************************ 00:05:51.003 00:21:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.003 END TEST rpc_daemon_integrity 00:05:51.003 ************************************ 00:05:51.262 00:21:37 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:51.262 00:21:37 rpc -- rpc/rpc.sh@84 -- # killprocess 68941 00:05:51.262 00:21:37 rpc -- common/autotest_common.sh@950 -- # '[' -z 68941 ']' 00:05:51.262 00:21:37 rpc -- common/autotest_common.sh@954 -- # kill -0 68941 00:05:51.262 00:21:37 rpc -- common/autotest_common.sh@955 -- # uname 00:05:51.262 00:21:37 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:51.262 00:21:37 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68941 00:05:51.262 00:21:37 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:51.262 00:21:37 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:51.262 killing process with pid 68941 00:05:51.262 00:21:37 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68941' 00:05:51.262 00:21:37 rpc -- common/autotest_common.sh@969 -- # kill 68941 00:05:51.262 00:21:37 rpc -- common/autotest_common.sh@974 -- # wait 68941 00:05:51.521 00:05:51.521 real 0m2.160s 00:05:51.521 user 0m2.913s 00:05:51.521 sys 0m0.583s 00:05:51.521 00:21:37 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.521 00:21:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.521 ************************************ 00:05:51.521 END TEST rpc 00:05:51.521 ************************************ 00:05:51.521 00:21:37 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:51.521 00:21:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.521 00:21:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.521 00:21:37 -- common/autotest_common.sh@10 -- # set +x 00:05:51.521 ************************************ 00:05:51.521 START TEST skip_rpc 00:05:51.521 ************************************ 00:05:51.521 00:21:37 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:51.521 * Looking for test storage... 
00:05:51.521 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:51.521 00:21:37 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:51.521 00:21:37 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:51.521 00:21:37 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:51.521 00:21:37 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:51.521 00:21:37 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.521 00:21:37 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.521 00:21:37 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.521 00:21:37 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.521 00:21:37 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.521 00:21:37 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.521 00:21:37 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.521 00:21:37 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.521 00:21:37 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.521 00:21:37 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.521 00:21:37 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.521 00:21:37 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:51.521 00:21:37 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:51.521 00:21:37 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.521 00:21:37 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:51.521 00:21:37 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:51.521 00:21:37 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:51.521 00:21:37 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.521 00:21:37 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:51.521 00:21:37 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.521 00:21:37 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:51.521 00:21:37 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:51.521 00:21:37 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.521 00:21:37 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:51.521 00:21:37 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.521 00:21:37 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.521 00:21:37 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.521 00:21:37 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:51.521 00:21:37 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.521 00:21:37 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:51.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.521 --rc genhtml_branch_coverage=1 00:05:51.521 --rc genhtml_function_coverage=1 00:05:51.521 --rc genhtml_legend=1 00:05:51.521 --rc geninfo_all_blocks=1 00:05:51.521 --rc geninfo_unexecuted_blocks=1 00:05:51.521 00:05:51.521 ' 00:05:51.521 00:21:37 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:51.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.521 --rc genhtml_branch_coverage=1 00:05:51.521 --rc genhtml_function_coverage=1 00:05:51.521 --rc genhtml_legend=1 00:05:51.521 --rc geninfo_all_blocks=1 00:05:51.521 --rc geninfo_unexecuted_blocks=1 00:05:51.521 00:05:51.521 ' 00:05:51.521 00:21:37 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:05:51.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.521 --rc genhtml_branch_coverage=1 00:05:51.521 --rc genhtml_function_coverage=1 00:05:51.521 --rc genhtml_legend=1 00:05:51.521 --rc geninfo_all_blocks=1 00:05:51.521 --rc geninfo_unexecuted_blocks=1 00:05:51.521 00:05:51.521 ' 00:05:51.521 00:21:37 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:51.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.521 --rc genhtml_branch_coverage=1 00:05:51.521 --rc genhtml_function_coverage=1 00:05:51.521 --rc genhtml_legend=1 00:05:51.521 --rc geninfo_all_blocks=1 00:05:51.521 --rc geninfo_unexecuted_blocks=1 00:05:51.521 00:05:51.521 ' 00:05:51.521 00:21:37 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:51.521 00:21:37 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:51.521 00:21:37 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:51.521 00:21:37 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.521 00:21:37 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.521 00:21:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.521 ************************************ 00:05:51.521 START TEST skip_rpc 00:05:51.521 ************************************ 00:05:51.521 00:21:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:51.521 00:21:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69134 00:05:51.521 00:21:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:51.521 00:21:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:51.521 00:21:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:51.781 [2024-12-17 00:21:37.578696] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:51.781 [2024-12-17 00:21:37.578823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69134 ] 00:05:51.781 [2024-12-17 00:21:37.717846] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.781 [2024-12-17 00:21:37.754569] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.040 [2024-12-17 00:21:37.788994] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:57.347 00:21:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:57.347 00:21:42 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:57.347 00:21:42 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:57.347 00:21:42 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:57.347 00:21:42 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.347 00:21:42 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:57.347 00:21:42 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.347 00:21:42 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:57.347 00:21:42 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.347 00:21:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.347 00:21:42 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:57.347 00:21:42 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:57.347 00:21:42 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:57.347 00:21:42 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:57.347 00:21:42 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:57.347 00:21:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:57.347 00:21:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69134 00:05:57.347 00:21:42 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 69134 ']' 00:05:57.347 00:21:42 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 69134 00:05:57.347 00:21:42 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:57.347 00:21:42 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:57.347 00:21:42 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69134 00:05:57.347 00:21:42 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:57.347 00:21:42 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:57.347 killing process with pid 69134 00:05:57.347 00:21:42 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69134' 00:05:57.347 00:21:42 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 69134 00:05:57.347 00:21:42 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 69134 00:05:57.347 00:05:57.347 real 0m5.270s 00:05:57.347 user 0m5.003s 00:05:57.347 sys 0m0.186s 00:05:57.347 00:21:42 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.347 00:21:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:05:57.347 ************************************ 00:05:57.347 END TEST skip_rpc 00:05:57.347 ************************************ 00:05:57.347 00:21:42 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:57.347 00:21:42 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:57.347 00:21:42 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.347 00:21:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.347 ************************************ 00:05:57.347 START TEST skip_rpc_with_json 00:05:57.347 ************************************ 00:05:57.347 00:21:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:57.347 00:21:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:57.347 00:21:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69215 00:05:57.347 00:21:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.347 00:21:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69215 00:05:57.347 00:21:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:57.347 00:21:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 69215 ']' 00:05:57.347 00:21:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.347 00:21:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:57.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.347 00:21:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.347 00:21:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:57.347 00:21:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:57.347 [2024-12-17 00:21:42.900532] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:57.347 [2024-12-17 00:21:42.900659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69215 ] 00:05:57.347 [2024-12-17 00:21:43.031811] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.347 [2024-12-17 00:21:43.066009] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.347 [2024-12-17 00:21:43.105747] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:57.915 00:21:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:57.916 00:21:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:57.916 00:21:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:57.916 00:21:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.916 00:21:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:57.916 [2024-12-17 00:21:43.874049] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:57.916 request: 00:05:57.916 { 00:05:57.916 "trtype": "tcp", 00:05:57.916 "method": "nvmf_get_transports", 00:05:57.916 "req_id": 1 00:05:57.916 } 00:05:57.916 Got JSON-RPC error response 00:05:57.916 response: 00:05:57.916 { 00:05:57.916 "code": -19, 00:05:57.916 "message": "No such device" 00:05:57.916 } 00:05:57.916 00:21:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:57.916 00:21:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:57.916 00:21:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.916 00:21:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:57.916 [2024-12-17 00:21:43.886160] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:57.916 00:21:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.916 00:21:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:57.916 00:21:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.916 00:21:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:58.175 00:21:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.175 00:21:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:58.175 { 00:05:58.175 "subsystems": [ 00:05:58.175 { 00:05:58.175 "subsystem": "fsdev", 00:05:58.175 "config": [ 00:05:58.175 { 00:05:58.175 "method": "fsdev_set_opts", 00:05:58.175 "params": { 00:05:58.175 "fsdev_io_pool_size": 65535, 00:05:58.175 "fsdev_io_cache_size": 256 00:05:58.175 } 00:05:58.175 } 00:05:58.175 ] 00:05:58.175 }, 00:05:58.175 { 00:05:58.175 "subsystem": "keyring", 00:05:58.175 "config": [] 00:05:58.175 }, 00:05:58.175 { 00:05:58.175 "subsystem": "iobuf", 00:05:58.175 "config": [ 00:05:58.175 { 00:05:58.175 "method": "iobuf_set_options", 00:05:58.175 "params": { 00:05:58.175 "small_pool_count": 8192, 00:05:58.175 "large_pool_count": 1024, 00:05:58.175 "small_bufsize": 8192, 00:05:58.175 "large_bufsize": 135168 00:05:58.175 } 00:05:58.175 } 00:05:58.175 ] 00:05:58.175 
}, 00:05:58.175 { 00:05:58.175 "subsystem": "sock", 00:05:58.175 "config": [ 00:05:58.175 { 00:05:58.175 "method": "sock_set_default_impl", 00:05:58.175 "params": { 00:05:58.175 "impl_name": "uring" 00:05:58.175 } 00:05:58.175 }, 00:05:58.175 { 00:05:58.175 "method": "sock_impl_set_options", 00:05:58.175 "params": { 00:05:58.175 "impl_name": "ssl", 00:05:58.175 "recv_buf_size": 4096, 00:05:58.175 "send_buf_size": 4096, 00:05:58.175 "enable_recv_pipe": true, 00:05:58.175 "enable_quickack": false, 00:05:58.175 "enable_placement_id": 0, 00:05:58.175 "enable_zerocopy_send_server": true, 00:05:58.175 "enable_zerocopy_send_client": false, 00:05:58.175 "zerocopy_threshold": 0, 00:05:58.175 "tls_version": 0, 00:05:58.175 "enable_ktls": false 00:05:58.175 } 00:05:58.175 }, 00:05:58.175 { 00:05:58.175 "method": "sock_impl_set_options", 00:05:58.175 "params": { 00:05:58.175 "impl_name": "posix", 00:05:58.175 "recv_buf_size": 2097152, 00:05:58.175 "send_buf_size": 2097152, 00:05:58.175 "enable_recv_pipe": true, 00:05:58.175 "enable_quickack": false, 00:05:58.175 "enable_placement_id": 0, 00:05:58.175 "enable_zerocopy_send_server": true, 00:05:58.175 "enable_zerocopy_send_client": false, 00:05:58.175 "zerocopy_threshold": 0, 00:05:58.175 "tls_version": 0, 00:05:58.175 "enable_ktls": false 00:05:58.175 } 00:05:58.175 }, 00:05:58.175 { 00:05:58.175 "method": "sock_impl_set_options", 00:05:58.175 "params": { 00:05:58.175 "impl_name": "uring", 00:05:58.175 "recv_buf_size": 2097152, 00:05:58.175 "send_buf_size": 2097152, 00:05:58.175 "enable_recv_pipe": true, 00:05:58.175 "enable_quickack": false, 00:05:58.175 "enable_placement_id": 0, 00:05:58.175 "enable_zerocopy_send_server": false, 00:05:58.175 "enable_zerocopy_send_client": false, 00:05:58.175 "zerocopy_threshold": 0, 00:05:58.175 "tls_version": 0, 00:05:58.175 "enable_ktls": false 00:05:58.175 } 00:05:58.175 } 00:05:58.175 ] 00:05:58.175 }, 00:05:58.175 { 00:05:58.175 "subsystem": "vmd", 00:05:58.175 "config": [] 00:05:58.175 }, 00:05:58.175 { 00:05:58.175 "subsystem": "accel", 00:05:58.175 "config": [ 00:05:58.175 { 00:05:58.175 "method": "accel_set_options", 00:05:58.175 "params": { 00:05:58.175 "small_cache_size": 128, 00:05:58.175 "large_cache_size": 16, 00:05:58.175 "task_count": 2048, 00:05:58.175 "sequence_count": 2048, 00:05:58.175 "buf_count": 2048 00:05:58.175 } 00:05:58.175 } 00:05:58.175 ] 00:05:58.175 }, 00:05:58.175 { 00:05:58.175 "subsystem": "bdev", 00:05:58.175 "config": [ 00:05:58.175 { 00:05:58.175 "method": "bdev_set_options", 00:05:58.175 "params": { 00:05:58.175 "bdev_io_pool_size": 65535, 00:05:58.175 "bdev_io_cache_size": 256, 00:05:58.176 "bdev_auto_examine": true, 00:05:58.176 "iobuf_small_cache_size": 128, 00:05:58.176 "iobuf_large_cache_size": 16 00:05:58.176 } 00:05:58.176 }, 00:05:58.176 { 00:05:58.176 "method": "bdev_raid_set_options", 00:05:58.176 "params": { 00:05:58.176 "process_window_size_kb": 1024, 00:05:58.176 "process_max_bandwidth_mb_sec": 0 00:05:58.176 } 00:05:58.176 }, 00:05:58.176 { 00:05:58.176 "method": "bdev_iscsi_set_options", 00:05:58.176 "params": { 00:05:58.176 "timeout_sec": 30 00:05:58.176 } 00:05:58.176 }, 00:05:58.176 { 00:05:58.176 "method": "bdev_nvme_set_options", 00:05:58.176 "params": { 00:05:58.176 "action_on_timeout": "none", 00:05:58.176 "timeout_us": 0, 00:05:58.176 "timeout_admin_us": 0, 00:05:58.176 "keep_alive_timeout_ms": 10000, 00:05:58.176 "arbitration_burst": 0, 00:05:58.176 "low_priority_weight": 0, 00:05:58.176 "medium_priority_weight": 0, 00:05:58.176 "high_priority_weight": 0, 
00:05:58.176 "nvme_adminq_poll_period_us": 10000, 00:05:58.176 "nvme_ioq_poll_period_us": 0, 00:05:58.176 "io_queue_requests": 0, 00:05:58.176 "delay_cmd_submit": true, 00:05:58.176 "transport_retry_count": 4, 00:05:58.176 "bdev_retry_count": 3, 00:05:58.176 "transport_ack_timeout": 0, 00:05:58.176 "ctrlr_loss_timeout_sec": 0, 00:05:58.176 "reconnect_delay_sec": 0, 00:05:58.176 "fast_io_fail_timeout_sec": 0, 00:05:58.176 "disable_auto_failback": false, 00:05:58.176 "generate_uuids": false, 00:05:58.176 "transport_tos": 0, 00:05:58.176 "nvme_error_stat": false, 00:05:58.176 "rdma_srq_size": 0, 00:05:58.176 "io_path_stat": false, 00:05:58.176 "allow_accel_sequence": false, 00:05:58.176 "rdma_max_cq_size": 0, 00:05:58.176 "rdma_cm_event_timeout_ms": 0, 00:05:58.176 "dhchap_digests": [ 00:05:58.176 "sha256", 00:05:58.176 "sha384", 00:05:58.176 "sha512" 00:05:58.176 ], 00:05:58.176 "dhchap_dhgroups": [ 00:05:58.176 "null", 00:05:58.176 "ffdhe2048", 00:05:58.176 "ffdhe3072", 00:05:58.176 "ffdhe4096", 00:05:58.176 "ffdhe6144", 00:05:58.176 "ffdhe8192" 00:05:58.176 ] 00:05:58.176 } 00:05:58.176 }, 00:05:58.176 { 00:05:58.176 "method": "bdev_nvme_set_hotplug", 00:05:58.176 "params": { 00:05:58.176 "period_us": 100000, 00:05:58.176 "enable": false 00:05:58.176 } 00:05:58.176 }, 00:05:58.176 { 00:05:58.176 "method": "bdev_wait_for_examine" 00:05:58.176 } 00:05:58.176 ] 00:05:58.176 }, 00:05:58.176 { 00:05:58.176 "subsystem": "scsi", 00:05:58.176 "config": null 00:05:58.176 }, 00:05:58.176 { 00:05:58.176 "subsystem": "scheduler", 00:05:58.176 "config": [ 00:05:58.176 { 00:05:58.176 "method": "framework_set_scheduler", 00:05:58.176 "params": { 00:05:58.176 "name": "static" 00:05:58.176 } 00:05:58.176 } 00:05:58.176 ] 00:05:58.176 }, 00:05:58.176 { 00:05:58.176 "subsystem": "vhost_scsi", 00:05:58.176 "config": [] 00:05:58.176 }, 00:05:58.176 { 00:05:58.176 "subsystem": "vhost_blk", 00:05:58.176 "config": [] 00:05:58.176 }, 00:05:58.176 { 00:05:58.176 "subsystem": "ublk", 00:05:58.176 "config": [] 00:05:58.176 }, 00:05:58.176 { 00:05:58.176 "subsystem": "nbd", 00:05:58.176 "config": [] 00:05:58.176 }, 00:05:58.176 { 00:05:58.176 "subsystem": "nvmf", 00:05:58.176 "config": [ 00:05:58.176 { 00:05:58.176 "method": "nvmf_set_config", 00:05:58.176 "params": { 00:05:58.176 "discovery_filter": "match_any", 00:05:58.176 "admin_cmd_passthru": { 00:05:58.176 "identify_ctrlr": false 00:05:58.176 }, 00:05:58.176 "dhchap_digests": [ 00:05:58.176 "sha256", 00:05:58.176 "sha384", 00:05:58.176 "sha512" 00:05:58.176 ], 00:05:58.176 "dhchap_dhgroups": [ 00:05:58.176 "null", 00:05:58.176 "ffdhe2048", 00:05:58.176 "ffdhe3072", 00:05:58.176 "ffdhe4096", 00:05:58.176 "ffdhe6144", 00:05:58.176 "ffdhe8192" 00:05:58.176 ] 00:05:58.176 } 00:05:58.176 }, 00:05:58.176 { 00:05:58.176 "method": "nvmf_set_max_subsystems", 00:05:58.176 "params": { 00:05:58.176 "max_subsystems": 1024 00:05:58.176 } 00:05:58.176 }, 00:05:58.176 { 00:05:58.176 "method": "nvmf_set_crdt", 00:05:58.176 "params": { 00:05:58.176 "crdt1": 0, 00:05:58.176 "crdt2": 0, 00:05:58.176 "crdt3": 0 00:05:58.176 } 00:05:58.176 }, 00:05:58.176 { 00:05:58.176 "method": "nvmf_create_transport", 00:05:58.176 "params": { 00:05:58.176 "trtype": "TCP", 00:05:58.176 "max_queue_depth": 128, 00:05:58.176 "max_io_qpairs_per_ctrlr": 127, 00:05:58.176 "in_capsule_data_size": 4096, 00:05:58.176 "max_io_size": 131072, 00:05:58.176 "io_unit_size": 131072, 00:05:58.176 "max_aq_depth": 128, 00:05:58.176 "num_shared_buffers": 511, 00:05:58.176 "buf_cache_size": 4294967295, 00:05:58.176 
"dif_insert_or_strip": false, 00:05:58.176 "zcopy": false, 00:05:58.176 "c2h_success": true, 00:05:58.176 "sock_priority": 0, 00:05:58.176 "abort_timeout_sec": 1, 00:05:58.176 "ack_timeout": 0, 00:05:58.176 "data_wr_pool_size": 0 00:05:58.176 } 00:05:58.176 } 00:05:58.176 ] 00:05:58.176 }, 00:05:58.176 { 00:05:58.176 "subsystem": "iscsi", 00:05:58.176 "config": [ 00:05:58.176 { 00:05:58.176 "method": "iscsi_set_options", 00:05:58.176 "params": { 00:05:58.176 "node_base": "iqn.2016-06.io.spdk", 00:05:58.176 "max_sessions": 128, 00:05:58.176 "max_connections_per_session": 2, 00:05:58.176 "max_queue_depth": 64, 00:05:58.176 "default_time2wait": 2, 00:05:58.176 "default_time2retain": 20, 00:05:58.176 "first_burst_length": 8192, 00:05:58.176 "immediate_data": true, 00:05:58.176 "allow_duplicated_isid": false, 00:05:58.176 "error_recovery_level": 0, 00:05:58.176 "nop_timeout": 60, 00:05:58.176 "nop_in_interval": 30, 00:05:58.176 "disable_chap": false, 00:05:58.176 "require_chap": false, 00:05:58.176 "mutual_chap": false, 00:05:58.176 "chap_group": 0, 00:05:58.176 "max_large_datain_per_connection": 64, 00:05:58.176 "max_r2t_per_connection": 4, 00:05:58.176 "pdu_pool_size": 36864, 00:05:58.176 "immediate_data_pool_size": 16384, 00:05:58.176 "data_out_pool_size": 2048 00:05:58.176 } 00:05:58.176 } 00:05:58.176 ] 00:05:58.176 } 00:05:58.176 ] 00:05:58.176 } 00:05:58.176 00:21:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:58.176 00:21:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69215 00:05:58.176 00:21:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69215 ']' 00:05:58.176 00:21:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69215 00:05:58.176 00:21:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:58.176 00:21:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:58.176 00:21:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69215 00:05:58.176 00:21:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:58.176 00:21:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:58.176 killing process with pid 69215 00:05:58.176 00:21:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69215' 00:05:58.176 00:21:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69215 00:05:58.176 00:21:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69215 00:05:58.436 00:21:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69243 00:05:58.436 00:21:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:58.436 00:21:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:03.710 00:21:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69243 00:06:03.711 00:21:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69243 ']' 00:06:03.711 00:21:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69243 00:06:03.711 00:21:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:03.711 00:21:49 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:03.711 00:21:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69243 00:06:03.711 00:21:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:03.711 00:21:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:03.711 killing process with pid 69243 00:06:03.711 00:21:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69243' 00:06:03.711 00:21:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69243 00:06:03.711 00:21:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69243 00:06:03.711 00:21:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:03.711 00:21:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:03.711 00:06:03.711 real 0m6.758s 00:06:03.711 user 0m6.726s 00:06:03.711 sys 0m0.435s 00:06:03.711 00:21:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.711 00:21:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:03.711 ************************************ 00:06:03.711 END TEST skip_rpc_with_json 00:06:03.711 ************************************ 00:06:03.711 00:21:49 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:03.711 00:21:49 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.711 00:21:49 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.711 00:21:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.711 ************************************ 00:06:03.711 START TEST skip_rpc_with_delay 00:06:03.711 ************************************ 00:06:03.711 00:21:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:03.711 00:21:49 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:03.711 00:21:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:03.711 00:21:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:03.711 00:21:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:03.711 00:21:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.711 00:21:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:03.711 00:21:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.711 00:21:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:03.711 00:21:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.711 00:21:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:03.711 00:21:49 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:03.711 00:21:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:03.970 [2024-12-17 00:21:49.713868] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:03.970 [2024-12-17 00:21:49.713994] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:03.970 00:21:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:03.970 00:21:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:03.970 00:21:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:03.970 00:21:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:03.970 00:06:03.970 real 0m0.087s 00:06:03.970 user 0m0.054s 00:06:03.970 sys 0m0.032s 00:06:03.970 00:21:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.970 00:21:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:03.970 ************************************ 00:06:03.970 END TEST skip_rpc_with_delay 00:06:03.970 ************************************ 00:06:03.970 00:21:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:03.970 00:21:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:03.970 00:21:49 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:03.970 00:21:49 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.970 00:21:49 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.970 00:21:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.970 ************************************ 00:06:03.970 START TEST exit_on_failed_rpc_init 00:06:03.970 ************************************ 00:06:03.970 00:21:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:03.970 00:21:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69352 00:06:03.970 00:21:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69352 00:06:03.970 00:21:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 69352 ']' 00:06:03.970 00:21:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:03.970 00:21:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.970 00:21:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.970 00:21:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.970 00:21:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.970 00:21:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:03.970 [2024-12-17 00:21:49.855021] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
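The first target in this exit_on_failed_rpc_init case comes up on SPDK's default RPC socket and the harness blocks in waitforlisten until that socket answers. A simplified stand-in for that launch-and-wait step, assuming the spdk_tgt binary built for this run (the real waitforlisten polls over rpc.py rather than just checking for the socket file):

    # Launch the first target on the default socket and wait for it to listen.
    # Simplified stand-in for waitforlisten; /var/tmp/spdk.sock is the default path.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    spdk_pid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    echo "target $spdk_pid is listening on /var/tmp/spdk.sock"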
00:06:03.970 [2024-12-17 00:21:49.855138] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69352 ] 00:06:04.229 [2024-12-17 00:21:49.984183] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.229 [2024-12-17 00:21:50.020066] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.229 [2024-12-17 00:21:50.057214] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:05.167 00:21:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.167 00:21:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:05.167 00:21:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:05.167 00:21:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:05.167 00:21:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:05.167 00:21:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:05.167 00:21:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:05.167 00:21:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.167 00:21:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:05.167 00:21:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.167 00:21:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:05.167 00:21:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.167 00:21:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:05.167 00:21:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:05.167 00:21:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:05.167 [2024-12-17 00:21:50.903415] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:05.167 [2024-12-17 00:21:50.903527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69370 ] 00:06:05.167 [2024-12-17 00:21:51.043833] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.167 [2024-12-17 00:21:51.083764] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.167 [2024-12-17 00:21:51.083883] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
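The "socket path in use" failure above is the expected outcome here: the second instance was deliberately pointed at the same default socket. Outside this negative test, two targets can coexist by giving each its own RPC path with -r, the same flag the json_config tests below use. A sketch with hypothetical socket names:

    # Hypothetical two-instance layout that avoids the spdk.sock collision;
    # -r selects the RPC listen path for each target.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &
    wait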
00:06:05.167 [2024-12-17 00:21:51.083900] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:05.167 [2024-12-17 00:21:51.083910] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:05.167 00:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:05.167 00:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:05.167 00:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:05.167 00:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:05.167 00:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:05.167 00:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:05.167 00:21:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:05.167 00:21:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69352 00:06:05.167 00:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 69352 ']' 00:06:05.167 00:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 69352 00:06:05.167 00:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:05.167 00:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:05.167 00:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69352 00:06:05.427 00:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:05.427 00:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:05.427 killing process with pid 69352 00:06:05.427 00:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69352' 00:06:05.427 00:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 69352 00:06:05.427 00:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 69352 00:06:05.427 00:06:05.427 real 0m1.631s 00:06:05.427 user 0m1.997s 00:06:05.427 sys 0m0.296s 00:06:05.427 00:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.427 00:21:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:05.427 ************************************ 00:06:05.427 END TEST exit_on_failed_rpc_init 00:06:05.427 ************************************ 00:06:05.686 00:21:51 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:05.686 00:06:05.686 real 0m14.149s 00:06:05.686 user 0m13.945s 00:06:05.686 sys 0m1.163s 00:06:05.686 00:21:51 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.686 00:21:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.686 ************************************ 00:06:05.686 END TEST skip_rpc 00:06:05.686 ************************************ 00:06:05.686 00:21:51 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:05.686 00:21:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.686 00:21:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.686 00:21:51 -- common/autotest_common.sh@10 -- # set +x 00:06:05.686 
************************************ 00:06:05.686 START TEST rpc_client 00:06:05.686 ************************************ 00:06:05.686 00:21:51 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:05.686 * Looking for test storage... 00:06:05.686 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:05.686 00:21:51 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:05.686 00:21:51 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:05.686 00:21:51 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:06:05.686 00:21:51 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:05.686 00:21:51 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.686 00:21:51 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.686 00:21:51 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.686 00:21:51 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.686 00:21:51 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.686 00:21:51 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.686 00:21:51 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.686 00:21:51 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.686 00:21:51 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.686 00:21:51 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.686 00:21:51 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.686 00:21:51 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:05.686 00:21:51 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:05.686 00:21:51 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.686 00:21:51 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:05.686 00:21:51 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:05.946 00:21:51 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:05.946 00:21:51 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.946 00:21:51 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:05.946 00:21:51 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.946 00:21:51 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:05.946 00:21:51 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:05.946 00:21:51 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.946 00:21:51 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:05.946 00:21:51 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.946 00:21:51 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.946 00:21:51 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.946 00:21:51 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:05.946 00:21:51 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.946 00:21:51 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:05.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.946 --rc genhtml_branch_coverage=1 00:06:05.946 --rc genhtml_function_coverage=1 00:06:05.946 --rc genhtml_legend=1 00:06:05.946 --rc geninfo_all_blocks=1 00:06:05.946 --rc geninfo_unexecuted_blocks=1 00:06:05.946 00:06:05.946 ' 00:06:05.946 00:21:51 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:05.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.946 --rc genhtml_branch_coverage=1 00:06:05.946 --rc genhtml_function_coverage=1 00:06:05.946 --rc genhtml_legend=1 00:06:05.946 --rc geninfo_all_blocks=1 00:06:05.946 --rc geninfo_unexecuted_blocks=1 00:06:05.946 00:06:05.946 ' 00:06:05.946 00:21:51 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:05.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.946 --rc genhtml_branch_coverage=1 00:06:05.946 --rc genhtml_function_coverage=1 00:06:05.946 --rc genhtml_legend=1 00:06:05.946 --rc geninfo_all_blocks=1 00:06:05.946 --rc geninfo_unexecuted_blocks=1 00:06:05.946 00:06:05.946 ' 00:06:05.946 00:21:51 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:05.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.946 --rc genhtml_branch_coverage=1 00:06:05.946 --rc genhtml_function_coverage=1 00:06:05.946 --rc genhtml_legend=1 00:06:05.946 --rc geninfo_all_blocks=1 00:06:05.946 --rc geninfo_unexecuted_blocks=1 00:06:05.946 00:06:05.946 ' 00:06:05.946 00:21:51 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:05.946 OK 00:06:05.946 00:21:51 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:05.946 00:06:05.946 real 0m0.200s 00:06:05.946 user 0m0.123s 00:06:05.946 sys 0m0.086s 00:06:05.946 ************************************ 00:06:05.946 END TEST rpc_client 00:06:05.946 ************************************ 00:06:05.946 00:21:51 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.946 00:21:51 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:05.946 00:21:51 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:05.946 00:21:51 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.946 00:21:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.946 00:21:51 -- common/autotest_common.sh@10 -- # set +x 00:06:05.946 ************************************ 00:06:05.946 START TEST json_config 00:06:05.946 ************************************ 00:06:05.946 00:21:51 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:05.946 00:21:51 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:05.946 00:21:51 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:06:05.946 00:21:51 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:05.946 00:21:51 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:05.946 00:21:51 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.946 00:21:51 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.946 00:21:51 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.946 00:21:51 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.946 00:21:51 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.946 00:21:51 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.946 00:21:51 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.946 00:21:51 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.946 00:21:51 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.946 00:21:51 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.946 00:21:51 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.946 00:21:51 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:05.946 00:21:51 json_config -- scripts/common.sh@345 -- # : 1 00:06:05.946 00:21:51 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.946 00:21:51 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:05.946 00:21:51 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:05.946 00:21:51 json_config -- scripts/common.sh@353 -- # local d=1 00:06:05.946 00:21:51 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.946 00:21:51 json_config -- scripts/common.sh@355 -- # echo 1 00:06:05.946 00:21:51 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.946 00:21:51 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:05.946 00:21:51 json_config -- scripts/common.sh@353 -- # local d=2 00:06:05.946 00:21:51 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.946 00:21:51 json_config -- scripts/common.sh@355 -- # echo 2 00:06:06.206 00:21:51 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.206 00:21:51 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.206 00:21:51 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.206 00:21:51 json_config -- scripts/common.sh@368 -- # return 0 00:06:06.206 00:21:51 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.206 00:21:51 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:06.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.206 --rc genhtml_branch_coverage=1 00:06:06.206 --rc genhtml_function_coverage=1 00:06:06.206 --rc genhtml_legend=1 00:06:06.206 --rc geninfo_all_blocks=1 00:06:06.206 --rc geninfo_unexecuted_blocks=1 00:06:06.206 00:06:06.206 ' 00:06:06.206 00:21:51 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:06.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.206 --rc genhtml_branch_coverage=1 00:06:06.206 --rc genhtml_function_coverage=1 00:06:06.206 --rc genhtml_legend=1 00:06:06.206 --rc geninfo_all_blocks=1 00:06:06.206 --rc geninfo_unexecuted_blocks=1 00:06:06.206 00:06:06.206 ' 00:06:06.206 00:21:51 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:06.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.206 --rc genhtml_branch_coverage=1 00:06:06.206 --rc genhtml_function_coverage=1 00:06:06.206 --rc genhtml_legend=1 00:06:06.206 --rc geninfo_all_blocks=1 00:06:06.206 --rc geninfo_unexecuted_blocks=1 00:06:06.206 00:06:06.206 ' 00:06:06.206 00:21:51 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:06.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.206 --rc genhtml_branch_coverage=1 00:06:06.206 --rc genhtml_function_coverage=1 00:06:06.206 --rc genhtml_legend=1 00:06:06.206 --rc geninfo_all_blocks=1 00:06:06.206 --rc geninfo_unexecuted_blocks=1 00:06:06.206 00:06:06.206 ' 00:06:06.206 00:21:51 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:06.206 00:21:51 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:06.206 00:21:51 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:06.206 00:21:51 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:06.206 00:21:51 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:06.206 00:21:51 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:06.206 00:21:51 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:06.206 00:21:51 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:06.206 00:21:51 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:06.206 00:21:51 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:06.206 00:21:51 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:06.206 00:21:51 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:06.206 00:21:51 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:06:06.206 00:21:51 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:06:06.206 00:21:51 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:06.206 00:21:51 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:06.206 00:21:51 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:06.206 00:21:51 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:06.206 00:21:51 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:06.206 00:21:51 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:06.206 00:21:51 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:06.206 00:21:51 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:06.206 00:21:51 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:06.206 00:21:51 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.206 00:21:51 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.206 00:21:51 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.206 00:21:51 json_config -- paths/export.sh@5 -- # export PATH 00:06:06.206 00:21:51 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.206 00:21:51 json_config -- nvmf/common.sh@51 -- # : 0 00:06:06.207 00:21:51 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:06.207 00:21:51 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:06.207 00:21:51 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:06.207 00:21:51 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:06.207 00:21:51 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:06.207 00:21:51 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:06.207 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:06.207 00:21:51 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:06.207 00:21:51 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:06.207 00:21:51 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:06.207 00:21:51 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:06.207 00:21:51 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:06.207 00:21:51 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:06.207 INFO: JSON configuration test init 00:06:06.207 00:21:51 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:06.207 00:21:51 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:06.207 00:21:51 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:06.207 00:21:51 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:06.207 00:21:51 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:06.207 00:21:51 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:06.207 00:21:51 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:06.207 00:21:51 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:06.207 00:21:51 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:06.207 00:21:51 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:06.207 00:21:51 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:06.207 00:21:51 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:06.207 00:21:51 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:06.207 00:21:51 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:06.207 00:21:51 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:06.207 00:21:51 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:06.207 00:21:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.207 00:21:51 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:06.207 00:21:51 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:06.207 00:21:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.207 00:21:51 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:06.207 00:21:51 json_config -- json_config/common.sh@9 -- # local app=target 00:06:06.207 00:21:51 json_config -- json_config/common.sh@10 -- # shift 
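The json_config suite keys everything off the associative arrays traced above; a condensed sketch of that bookkeeping, with the values recorded in this run:

    # Per-app bookkeeping mirrored from the declare -A traces above.
    declare -A app_socket=([target]=/var/tmp/spdk_tgt.sock [initiator]=/var/tmp/spdk_initiator.sock)
    declare -A app_params=([target]='-m 0x1 -s 1024' [initiator]='-m 0x2 -g -u -s 1024')
    # Any RPC is then addressed to one app through its socket:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "${app_socket[target]}" save_config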
00:06:06.207 00:21:51 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:06.207 00:21:51 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:06.207 00:21:51 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:06.207 00:21:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:06.207 00:21:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:06.207 00:21:51 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=69504 00:06:06.207 00:21:51 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:06.207 Waiting for target to run... 00:06:06.207 00:21:51 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:06.207 00:21:51 json_config -- json_config/common.sh@25 -- # waitforlisten 69504 /var/tmp/spdk_tgt.sock 00:06:06.207 00:21:51 json_config -- common/autotest_common.sh@831 -- # '[' -z 69504 ']' 00:06:06.207 00:21:51 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:06.207 00:21:51 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:06.207 00:21:51 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:06.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:06.207 00:21:51 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:06.207 00:21:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.207 [2024-12-17 00:21:52.061720] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
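The target for the config tests is started paused with --wait-for-rpc (flags as traced above), and the harness then waits for its socket. One way to do that wait by hand, assuming rpc_get_methods is reachable before framework init:

    # Launch command copied from the trace above; waitforlisten blocks until
    # /var/tmp/spdk_tgt.sock answers RPCs.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
          rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done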
00:06:06.207 [2024-12-17 00:21:52.062006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69504 ] 00:06:06.466 [2024-12-17 00:21:52.381156] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.466 [2024-12-17 00:21:52.402142] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.403 00:06:07.403 00:21:53 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:07.403 00:21:53 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:07.403 00:21:53 json_config -- json_config/common.sh@26 -- # echo '' 00:06:07.403 00:21:53 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:07.403 00:21:53 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:07.403 00:21:53 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:07.403 00:21:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.403 00:21:53 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:07.403 00:21:53 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:07.403 00:21:53 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:07.403 00:21:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.403 00:21:53 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:07.403 00:21:53 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:07.403 00:21:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:07.662 [2024-12-17 00:21:53.409399] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:07.662 00:21:53 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:07.662 00:21:53 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:07.662 00:21:53 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:07.662 00:21:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.662 00:21:53 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:07.662 00:21:53 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:07.662 00:21:53 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:07.662 00:21:53 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:07.662 00:21:53 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:07.662 00:21:53 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:07.663 00:21:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:07.663 00:21:53 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:07.922 00:21:53 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:07.922 00:21:53 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:07.922 00:21:53 json_config -- json_config/json_config.sh@53 
-- # local type_diff 00:06:07.922 00:21:53 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:07.922 00:21:53 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:07.922 00:21:53 json_config -- json_config/json_config.sh@54 -- # sort 00:06:07.922 00:21:53 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:07.922 00:21:53 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:07.922 00:21:53 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:07.922 00:21:53 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:07.922 00:21:53 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:07.922 00:21:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.922 00:21:53 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:07.922 00:21:53 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:07.922 00:21:53 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:07.922 00:21:53 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:07.922 00:21:53 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:07.922 00:21:53 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:07.922 00:21:53 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:07.922 00:21:53 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:07.922 00:21:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.922 00:21:53 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:07.922 00:21:53 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:07.922 00:21:53 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:07.922 00:21:53 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:07.922 00:21:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:08.182 MallocForNvmf0 00:06:08.182 00:21:54 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:08.182 00:21:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:08.441 MallocForNvmf1 00:06:08.441 00:21:54 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:08.441 00:21:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:08.701 [2024-12-17 00:21:54.587304] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:08.701 00:21:54 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:08.701 00:21:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:08.960 00:21:54 json_config -- json_config/json_config.sh@254 -- # tgt_rpc 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:08.960 00:21:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:09.226 00:21:55 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:09.226 00:21:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:09.484 00:21:55 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:09.484 00:21:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:09.743 [2024-12-17 00:21:55.551817] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:09.743 00:21:55 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:09.743 00:21:55 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:09.743 00:21:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:09.743 00:21:55 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:09.743 00:21:55 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:09.743 00:21:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:09.743 00:21:55 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:09.743 00:21:55 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:09.743 00:21:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:10.002 MallocBdevForConfigChangeCheck 00:06:10.002 00:21:55 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:10.002 00:21:55 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:10.002 00:21:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.002 00:21:55 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:10.002 00:21:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:10.570 INFO: shutting down applications... 00:06:10.570 00:21:56 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
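Before the shutdown that follows, the nvmf configuration under test was built with this RPC sequence (arguments copied from the traces above):

    # NVMe-oF target setup as issued over /var/tmp/spdk_tgt.sock in this run.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420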
00:06:10.570 00:21:56 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:10.570 00:21:56 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:10.570 00:21:56 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:10.570 00:21:56 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:10.828 Calling clear_iscsi_subsystem 00:06:10.828 Calling clear_nvmf_subsystem 00:06:10.828 Calling clear_nbd_subsystem 00:06:10.828 Calling clear_ublk_subsystem 00:06:10.828 Calling clear_vhost_blk_subsystem 00:06:10.828 Calling clear_vhost_scsi_subsystem 00:06:10.828 Calling clear_bdev_subsystem 00:06:10.828 00:21:56 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:10.829 00:21:56 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:10.829 00:21:56 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:10.829 00:21:56 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:10.829 00:21:56 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:10.829 00:21:56 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:11.088 00:21:57 json_config -- json_config/json_config.sh@352 -- # break 00:06:11.088 00:21:57 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:11.088 00:21:57 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:11.088 00:21:57 json_config -- json_config/common.sh@31 -- # local app=target 00:06:11.088 00:21:57 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:11.088 00:21:57 json_config -- json_config/common.sh@35 -- # [[ -n 69504 ]] 00:06:11.088 00:21:57 json_config -- json_config/common.sh@38 -- # kill -SIGINT 69504 00:06:11.088 00:21:57 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:11.088 00:21:57 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:11.088 00:21:57 json_config -- json_config/common.sh@41 -- # kill -0 69504 00:06:11.088 00:21:57 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:11.656 00:21:57 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:11.656 00:21:57 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:11.657 00:21:57 json_config -- json_config/common.sh@41 -- # kill -0 69504 00:06:11.657 SPDK target shutdown done 00:06:11.657 INFO: relaunching applications... 00:06:11.657 00:21:57 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:11.657 00:21:57 json_config -- json_config/common.sh@43 -- # break 00:06:11.657 00:21:57 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:11.657 00:21:57 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:11.657 00:21:57 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 
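The shutdown traced above follows a simple signal-and-poll pattern; condensed with the pid from this run:

    # Send SIGINT to the target, then poll until the pid is gone
    # (mirrors the kill -0 loop traced above).
    kill -SIGINT 69504
    for _ in $(seq 1 30); do
        kill -0 69504 2>/dev/null || break
        sleep 0.5
    done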
00:06:11.657 00:21:57 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:11.657 00:21:57 json_config -- json_config/common.sh@9 -- # local app=target 00:06:11.657 00:21:57 json_config -- json_config/common.sh@10 -- # shift 00:06:11.657 00:21:57 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:11.657 00:21:57 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:11.657 00:21:57 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:11.657 00:21:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:11.657 00:21:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:11.657 00:21:57 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=69700 00:06:11.657 Waiting for target to run... 00:06:11.657 00:21:57 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:11.657 00:21:57 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:11.657 00:21:57 json_config -- json_config/common.sh@25 -- # waitforlisten 69700 /var/tmp/spdk_tgt.sock 00:06:11.657 00:21:57 json_config -- common/autotest_common.sh@831 -- # '[' -z 69700 ']' 00:06:11.657 00:21:57 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:11.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:11.657 00:21:57 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.657 00:21:57 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:11.657 00:21:57 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.657 00:21:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.657 [2024-12-17 00:21:57.646962] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:11.657 [2024-12-17 00:21:57.647084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69700 ] 00:06:12.223 [2024-12-17 00:21:57.952816] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.223 [2024-12-17 00:21:57.978944] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.223 [2024-12-17 00:21:58.106824] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:12.482 [2024-12-17 00:21:58.295307] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:12.482 [2024-12-17 00:21:58.327411] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:12.741 00:06:12.741 INFO: Checking if target configuration is the same... 
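The equality check that follows round-trips the configuration: dump it from the relaunched target, normalize both dumps, and diff them. A condensed sketch with hypothetical temp paths, assuming config_filter.py filters stdin as the traces suggest:

    # Compare the saved config file against what the relaunched target reports.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    SORT="/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort"
    $RPC save_config | $SORT > /tmp/current.json    # hypothetical temp path
    $SORT < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/saved.json
    diff -u /tmp/saved.json /tmp/current.json && echo 'INFO: JSON config files are the same'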
00:06:12.741 00:21:58 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.741 00:21:58 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:12.741 00:21:58 json_config -- json_config/common.sh@26 -- # echo '' 00:06:12.741 00:21:58 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:12.741 00:21:58 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:12.741 00:21:58 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:12.741 00:21:58 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:12.741 00:21:58 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:12.741 + '[' 2 -ne 2 ']' 00:06:12.741 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:12.741 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:12.741 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:12.741 +++ basename /dev/fd/62 00:06:12.741 ++ mktemp /tmp/62.XXX 00:06:12.741 + tmp_file_1=/tmp/62.IXr 00:06:12.741 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:12.741 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:12.741 + tmp_file_2=/tmp/spdk_tgt_config.json.xmX 00:06:12.741 + ret=0 00:06:12.741 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:13.309 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:13.309 + diff -u /tmp/62.IXr /tmp/spdk_tgt_config.json.xmX 00:06:13.309 INFO: JSON config files are the same 00:06:13.309 + echo 'INFO: JSON config files are the same' 00:06:13.309 + rm /tmp/62.IXr /tmp/spdk_tgt_config.json.xmX 00:06:13.309 + exit 0 00:06:13.309 INFO: changing configuration and checking if this can be detected... 00:06:13.309 00:21:59 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:13.309 00:21:59 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:13.309 00:21:59 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:13.309 00:21:59 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:13.568 00:21:59 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:13.568 00:21:59 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:13.568 00:21:59 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:13.568 + '[' 2 -ne 2 ']' 00:06:13.568 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:13.568 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:06:13.568 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:13.568 +++ basename /dev/fd/62 00:06:13.568 ++ mktemp /tmp/62.XXX 00:06:13.568 + tmp_file_1=/tmp/62.LDz 00:06:13.568 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:13.568 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:13.568 + tmp_file_2=/tmp/spdk_tgt_config.json.PBi 00:06:13.568 + ret=0 00:06:13.568 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:13.827 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:13.827 + diff -u /tmp/62.LDz /tmp/spdk_tgt_config.json.PBi 00:06:13.827 + ret=1 00:06:13.827 + echo '=== Start of file: /tmp/62.LDz ===' 00:06:13.827 + cat /tmp/62.LDz 00:06:13.827 + echo '=== End of file: /tmp/62.LDz ===' 00:06:13.827 + echo '' 00:06:13.827 + echo '=== Start of file: /tmp/spdk_tgt_config.json.PBi ===' 00:06:13.827 + cat /tmp/spdk_tgt_config.json.PBi 00:06:13.827 + echo '=== End of file: /tmp/spdk_tgt_config.json.PBi ===' 00:06:13.827 + echo '' 00:06:13.827 + rm /tmp/62.LDz /tmp/spdk_tgt_config.json.PBi 00:06:13.827 + exit 1 00:06:13.827 INFO: configuration change detected. 00:06:13.827 00:21:59 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:13.827 00:21:59 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:13.827 00:21:59 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:13.827 00:21:59 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:13.827 00:21:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.827 00:21:59 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:13.827 00:21:59 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:13.827 00:21:59 json_config -- json_config/json_config.sh@324 -- # [[ -n 69700 ]] 00:06:13.827 00:21:59 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:13.827 00:21:59 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:13.827 00:21:59 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:13.827 00:21:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.827 00:21:59 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:13.827 00:21:59 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:13.827 00:21:59 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:13.827 00:21:59 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:13.827 00:21:59 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:13.827 00:21:59 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:13.827 00:21:59 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:13.827 00:21:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:14.087 00:21:59 json_config -- json_config/json_config.sh@330 -- # killprocess 69700 00:06:14.087 00:21:59 json_config -- common/autotest_common.sh@950 -- # '[' -z 69700 ']' 00:06:14.087 00:21:59 json_config -- common/autotest_common.sh@954 -- # kill -0 69700 00:06:14.087 00:21:59 json_config -- common/autotest_common.sh@955 -- # uname 00:06:14.087 00:21:59 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:14.087 00:21:59 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69700 00:06:14.087 
killing process with pid 69700 00:06:14.087 00:21:59 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:14.087 00:21:59 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:14.087 00:21:59 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69700' 00:06:14.087 00:21:59 json_config -- common/autotest_common.sh@969 -- # kill 69700 00:06:14.087 00:21:59 json_config -- common/autotest_common.sh@974 -- # wait 69700 00:06:14.087 00:22:00 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:14.087 00:22:00 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:14.087 00:22:00 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:14.087 00:22:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:14.346 INFO: Success 00:06:14.346 00:22:00 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:14.346 00:22:00 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:14.346 ************************************ 00:06:14.346 END TEST json_config 00:06:14.346 ************************************ 00:06:14.346 00:06:14.346 real 0m8.340s 00:06:14.346 user 0m12.039s 00:06:14.346 sys 0m1.484s 00:06:14.346 00:22:00 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.346 00:22:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:14.347 00:22:00 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:14.347 00:22:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:14.347 00:22:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.347 00:22:00 -- common/autotest_common.sh@10 -- # set +x 00:06:14.347 ************************************ 00:06:14.347 START TEST json_config_extra_key 00:06:14.347 ************************************ 00:06:14.347 00:22:00 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:14.347 00:22:00 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:14.347 00:22:00 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:06:14.347 00:22:00 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:14.347 00:22:00 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:14.347 00:22:00 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.347 00:22:00 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.347 00:22:00 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.347 00:22:00 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.347 00:22:00 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.347 00:22:00 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.347 00:22:00 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.347 00:22:00 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.347 00:22:00 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.347 00:22:00 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.347 00:22:00 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.347 00:22:00 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:14.347 00:22:00 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:14.347 00:22:00 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.347 00:22:00 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:14.347 00:22:00 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:14.347 00:22:00 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:14.347 00:22:00 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.347 00:22:00 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:14.347 00:22:00 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.347 00:22:00 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:14.347 00:22:00 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:14.347 00:22:00 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.347 00:22:00 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:14.347 00:22:00 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.347 00:22:00 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.347 00:22:00 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.347 00:22:00 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:14.347 00:22:00 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.347 00:22:00 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:14.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.347 --rc genhtml_branch_coverage=1 00:06:14.347 --rc genhtml_function_coverage=1 00:06:14.347 --rc genhtml_legend=1 00:06:14.347 --rc geninfo_all_blocks=1 00:06:14.347 --rc geninfo_unexecuted_blocks=1 00:06:14.347 00:06:14.347 ' 00:06:14.347 00:22:00 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:14.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.347 --rc genhtml_branch_coverage=1 00:06:14.347 --rc genhtml_function_coverage=1 00:06:14.347 --rc genhtml_legend=1 00:06:14.347 --rc geninfo_all_blocks=1 00:06:14.347 --rc geninfo_unexecuted_blocks=1 00:06:14.347 00:06:14.347 ' 00:06:14.347 00:22:00 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:14.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.347 --rc genhtml_branch_coverage=1 00:06:14.347 --rc genhtml_function_coverage=1 00:06:14.347 --rc genhtml_legend=1 00:06:14.347 --rc geninfo_all_blocks=1 00:06:14.347 --rc geninfo_unexecuted_blocks=1 00:06:14.347 00:06:14.347 ' 00:06:14.347 00:22:00 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:14.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.347 --rc genhtml_branch_coverage=1 00:06:14.347 --rc genhtml_function_coverage=1 00:06:14.347 --rc genhtml_legend=1 00:06:14.347 --rc geninfo_all_blocks=1 00:06:14.347 --rc geninfo_unexecuted_blocks=1 00:06:14.347 00:06:14.347 ' 00:06:14.347 00:22:00 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:14.347 00:22:00 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:06:14.347 00:22:00 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:14.347 00:22:00 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:14.347 00:22:00 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:14.347 00:22:00 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:14.347 00:22:00 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:14.347 00:22:00 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:14.347 00:22:00 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:14.347 00:22:00 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:14.347 00:22:00 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:14.347 00:22:00 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:14.347 00:22:00 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:06:14.347 00:22:00 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:06:14.347 00:22:00 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:14.347 00:22:00 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:14.347 00:22:00 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:14.347 00:22:00 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:14.347 00:22:00 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:14.347 00:22:00 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:14.347 00:22:00 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:14.347 00:22:00 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:14.347 00:22:00 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:14.347 00:22:00 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.347 00:22:00 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.347 00:22:00 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.347 00:22:00 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:14.347 00:22:00 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.347 00:22:00 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:14.347 00:22:00 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:14.347 00:22:00 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:14.347 00:22:00 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:14.347 00:22:00 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:14.347 00:22:00 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:14.347 00:22:00 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:14.347 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:14.347 00:22:00 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:14.347 00:22:00 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:14.347 00:22:00 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:14.347 00:22:00 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:14.347 00:22:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:14.347 00:22:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:14.347 00:22:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:14.347 00:22:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:14.347 00:22:00 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:14.347 00:22:00 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:14.347 00:22:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:14.347 00:22:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:14.347 00:22:00 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:14.347 00:22:00 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:14.347 INFO: launching applications... 
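json_config_extra_key boots the target directly from a pre-written JSON file (test/json_config/extra_key.json) rather than building the configuration over RPC. The log does not show that file's contents; a hypothetical config in the same subsystems/method/params layout that save_config produces, written here via a heredoc, might look roughly like this (the bdev name and sizes are made up for illustration):

cat > /tmp/extra_key.example.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "MallocForTest", "num_blocks": 8192, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF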
00:06:14.347 00:22:00 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:14.347 00:22:00 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:14.348 00:22:00 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:14.348 00:22:00 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:14.348 00:22:00 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:14.348 00:22:00 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:14.348 00:22:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:14.348 00:22:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:14.348 00:22:00 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69848 00:06:14.348 Waiting for target to run... 00:06:14.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:14.348 00:22:00 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:14.348 00:22:00 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69848 /var/tmp/spdk_tgt.sock 00:06:14.348 00:22:00 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 69848 ']' 00:06:14.348 00:22:00 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:14.348 00:22:00 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:14.348 00:22:00 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.348 00:22:00 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:14.348 00:22:00 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.348 00:22:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:14.607 [2024-12-17 00:22:00.409980] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:14.607 [2024-12-17 00:22:00.410084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69848 ] 00:06:14.865 [2024-12-17 00:22:00.717638] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.865 [2024-12-17 00:22:00.738514] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.865 [2024-12-17 00:22:00.761071] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.433 00:06:15.433 INFO: shutting down applications... 00:06:15.433 00:22:01 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.433 00:22:01 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:15.433 00:22:01 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:15.433 00:22:01 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
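This launch, like the earlier json_config relaunch, uses the start-and-wait pattern from json_config/common.sh: start spdk_tgt in the background with a private RPC socket, record its PID in app_pid, and wait until the target is reachable before issuing RPCs. A minimal sketch under simplifying assumptions: the real waitforlisten also verifies the RPC server actually responds, whereas this version only waits for the UNIX socket to appear, and $SPDK_BIN is an illustrative variable pointing at the build output:

start_target() {
    local json_cfg=$1
    local sock=/var/tmp/spdk_tgt.sock
    "$SPDK_BIN/spdk_tgt" -m 0x1 -s 1024 -r "$sock" --json "$json_cfg" &
    app_pid=$!
    for ((i = 0; i < 100; i++)); do        # ~10s budget at 0.1s per try
        [[ -S $sock ]] && return 0         # socket exists; target is (probably) listening
        sleep 0.1
    done
    echo "spdk_tgt (pid $app_pid) never created $sock" >&2
    return 1
}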
00:06:15.433 00:22:01 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:15.433 00:22:01 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:15.433 00:22:01 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:15.433 00:22:01 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69848 ]] 00:06:15.433 00:22:01 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69848 00:06:15.433 00:22:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:15.433 00:22:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:15.433 00:22:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69848 00:06:15.433 00:22:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:16.000 00:22:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:16.000 00:22:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:16.000 00:22:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69848 00:06:16.000 00:22:01 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:16.000 00:22:01 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:16.000 00:22:01 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:16.000 00:22:01 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:16.000 SPDK target shutdown done 00:06:16.000 Success 00:06:16.000 00:22:01 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:16.000 00:06:16.000 real 0m1.786s 00:06:16.000 user 0m1.658s 00:06:16.000 sys 0m0.323s 00:06:16.000 00:22:01 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.000 00:22:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:16.000 ************************************ 00:06:16.000 END TEST json_config_extra_key 00:06:16.000 ************************************ 00:06:16.000 00:22:01 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:16.000 00:22:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:16.000 00:22:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.000 00:22:01 -- common/autotest_common.sh@10 -- # set +x 00:06:16.000 ************************************ 00:06:16.000 START TEST alias_rpc 00:06:16.000 ************************************ 00:06:16.000 00:22:01 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:16.260 * Looking for test storage... 
00:06:16.260 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:16.260 00:22:02 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:16.260 00:22:02 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:16.260 00:22:02 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:16.260 00:22:02 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:16.260 00:22:02 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.260 00:22:02 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.260 00:22:02 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.260 00:22:02 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.260 00:22:02 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.260 00:22:02 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.260 00:22:02 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.260 00:22:02 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.260 00:22:02 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.260 00:22:02 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.260 00:22:02 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.260 00:22:02 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:16.260 00:22:02 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:16.260 00:22:02 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.260 00:22:02 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:16.260 00:22:02 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:16.260 00:22:02 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:16.260 00:22:02 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.260 00:22:02 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:16.260 00:22:02 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.260 00:22:02 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:16.260 00:22:02 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:16.260 00:22:02 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.260 00:22:02 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:16.260 00:22:02 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.260 00:22:02 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.260 00:22:02 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.260 00:22:02 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:16.260 00:22:02 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.260 00:22:02 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:16.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.260 --rc genhtml_branch_coverage=1 00:06:16.260 --rc genhtml_function_coverage=1 00:06:16.260 --rc genhtml_legend=1 00:06:16.260 --rc geninfo_all_blocks=1 00:06:16.260 --rc geninfo_unexecuted_blocks=1 00:06:16.260 00:06:16.260 ' 00:06:16.260 00:22:02 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:16.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.260 --rc genhtml_branch_coverage=1 00:06:16.260 --rc genhtml_function_coverage=1 00:06:16.260 --rc genhtml_legend=1 00:06:16.260 --rc geninfo_all_blocks=1 00:06:16.260 --rc geninfo_unexecuted_blocks=1 00:06:16.260 00:06:16.260 ' 00:06:16.260 00:22:02 alias_rpc -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:16.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.260 --rc genhtml_branch_coverage=1 00:06:16.260 --rc genhtml_function_coverage=1 00:06:16.260 --rc genhtml_legend=1 00:06:16.260 --rc geninfo_all_blocks=1 00:06:16.260 --rc geninfo_unexecuted_blocks=1 00:06:16.260 00:06:16.260 ' 00:06:16.260 00:22:02 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:16.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.260 --rc genhtml_branch_coverage=1 00:06:16.260 --rc genhtml_function_coverage=1 00:06:16.260 --rc genhtml_legend=1 00:06:16.260 --rc geninfo_all_blocks=1 00:06:16.260 --rc geninfo_unexecuted_blocks=1 00:06:16.260 00:06:16.260 ' 00:06:16.260 00:22:02 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:16.260 00:22:02 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=69926 00:06:16.260 00:22:02 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:16.260 00:22:02 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 69926 00:06:16.260 00:22:02 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 69926 ']' 00:06:16.260 00:22:02 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.260 00:22:02 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.260 00:22:02 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.260 00:22:02 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.260 00:22:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.260 [2024-12-17 00:22:02.221832] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:16.260 [2024-12-17 00:22:02.221922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69926 ] 00:06:16.520 [2024-12-17 00:22:02.353009] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.520 [2024-12-17 00:22:02.386084] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.520 [2024-12-17 00:22:02.420238] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:16.779 00:22:02 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:16.779 00:22:02 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:16.779 00:22:02 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:17.039 00:22:02 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 69926 00:06:17.039 00:22:02 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 69926 ']' 00:06:17.039 00:22:02 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 69926 00:06:17.039 00:22:02 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:17.039 00:22:02 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:17.039 00:22:02 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69926 00:06:17.039 killing process with pid 69926 00:06:17.039 00:22:02 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:17.039 00:22:02 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:17.039 00:22:02 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69926' 00:06:17.039 00:22:02 alias_rpc -- common/autotest_common.sh@969 -- # kill 69926 00:06:17.039 00:22:02 alias_rpc -- common/autotest_common.sh@974 -- # wait 69926 00:06:17.298 ************************************ 00:06:17.298 END TEST alias_rpc 00:06:17.298 ************************************ 00:06:17.298 00:06:17.298 real 0m1.153s 00:06:17.298 user 0m1.380s 00:06:17.298 sys 0m0.305s 00:06:17.298 00:22:03 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.298 00:22:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.298 00:22:03 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:17.298 00:22:03 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:17.298 00:22:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:17.298 00:22:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.298 00:22:03 -- common/autotest_common.sh@10 -- # set +x 00:06:17.298 ************************************ 00:06:17.298 START TEST spdkcli_tcp 00:06:17.298 ************************************ 00:06:17.298 00:22:03 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:17.298 * Looking for test storage... 
00:06:17.298 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:17.298 00:22:03 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:17.298 00:22:03 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:17.298 00:22:03 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:17.557 00:22:03 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:17.557 00:22:03 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.557 00:22:03 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.557 00:22:03 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.557 00:22:03 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.557 00:22:03 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.557 00:22:03 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.557 00:22:03 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.557 00:22:03 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.557 00:22:03 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.557 00:22:03 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.557 00:22:03 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.557 00:22:03 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:17.557 00:22:03 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:17.557 00:22:03 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.557 00:22:03 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:17.557 00:22:03 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:17.557 00:22:03 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:17.557 00:22:03 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.557 00:22:03 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:17.557 00:22:03 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.557 00:22:03 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:17.557 00:22:03 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:17.557 00:22:03 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.557 00:22:03 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:17.557 00:22:03 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.557 00:22:03 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.557 00:22:03 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.557 00:22:03 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:17.557 00:22:03 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.557 00:22:03 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:17.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.557 --rc genhtml_branch_coverage=1 00:06:17.557 --rc genhtml_function_coverage=1 00:06:17.557 --rc genhtml_legend=1 00:06:17.557 --rc geninfo_all_blocks=1 00:06:17.557 --rc geninfo_unexecuted_blocks=1 00:06:17.557 00:06:17.557 ' 00:06:17.557 00:22:03 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:17.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.557 --rc genhtml_branch_coverage=1 00:06:17.557 --rc genhtml_function_coverage=1 00:06:17.557 --rc genhtml_legend=1 00:06:17.557 --rc geninfo_all_blocks=1 00:06:17.557 --rc geninfo_unexecuted_blocks=1 00:06:17.557 
00:06:17.557 ' 00:06:17.557 00:22:03 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:17.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.557 --rc genhtml_branch_coverage=1 00:06:17.557 --rc genhtml_function_coverage=1 00:06:17.557 --rc genhtml_legend=1 00:06:17.557 --rc geninfo_all_blocks=1 00:06:17.557 --rc geninfo_unexecuted_blocks=1 00:06:17.557 00:06:17.557 ' 00:06:17.557 00:22:03 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:17.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.557 --rc genhtml_branch_coverage=1 00:06:17.557 --rc genhtml_function_coverage=1 00:06:17.557 --rc genhtml_legend=1 00:06:17.557 --rc geninfo_all_blocks=1 00:06:17.557 --rc geninfo_unexecuted_blocks=1 00:06:17.557 00:06:17.557 ' 00:06:17.557 00:22:03 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:17.557 00:22:03 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:17.557 00:22:03 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:17.558 00:22:03 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:17.558 00:22:03 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:17.558 00:22:03 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:17.558 00:22:03 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:17.558 00:22:03 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:17.558 00:22:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:17.558 00:22:03 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=69997 00:06:17.558 00:22:03 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:17.558 00:22:03 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 69997 00:06:17.558 00:22:03 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 69997 ']' 00:06:17.558 00:22:03 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.558 00:22:03 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.558 00:22:03 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.558 00:22:03 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.558 00:22:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:17.558 [2024-12-17 00:22:03.468612] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:17.558 [2024-12-17 00:22:03.468892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69997 ] 00:06:17.816 [2024-12-17 00:22:03.605641] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:17.816 [2024-12-17 00:22:03.641716] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.816 [2024-12-17 00:22:03.641724] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.816 [2024-12-17 00:22:03.680985] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:17.816 00:22:03 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.816 00:22:03 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:17.816 00:22:03 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70007 00:06:17.816 00:22:03 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:17.816 00:22:03 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:18.385 [ 00:06:18.385 "bdev_malloc_delete", 00:06:18.385 "bdev_malloc_create", 00:06:18.385 "bdev_null_resize", 00:06:18.385 "bdev_null_delete", 00:06:18.385 "bdev_null_create", 00:06:18.385 "bdev_nvme_cuse_unregister", 00:06:18.385 "bdev_nvme_cuse_register", 00:06:18.385 "bdev_opal_new_user", 00:06:18.385 "bdev_opal_set_lock_state", 00:06:18.385 "bdev_opal_delete", 00:06:18.385 "bdev_opal_get_info", 00:06:18.385 "bdev_opal_create", 00:06:18.385 "bdev_nvme_opal_revert", 00:06:18.385 "bdev_nvme_opal_init", 00:06:18.385 "bdev_nvme_send_cmd", 00:06:18.385 "bdev_nvme_set_keys", 00:06:18.385 "bdev_nvme_get_path_iostat", 00:06:18.385 "bdev_nvme_get_mdns_discovery_info", 00:06:18.385 "bdev_nvme_stop_mdns_discovery", 00:06:18.385 "bdev_nvme_start_mdns_discovery", 00:06:18.385 "bdev_nvme_set_multipath_policy", 00:06:18.385 "bdev_nvme_set_preferred_path", 00:06:18.385 "bdev_nvme_get_io_paths", 00:06:18.385 "bdev_nvme_remove_error_injection", 00:06:18.385 "bdev_nvme_add_error_injection", 00:06:18.385 "bdev_nvme_get_discovery_info", 00:06:18.385 "bdev_nvme_stop_discovery", 00:06:18.385 "bdev_nvme_start_discovery", 00:06:18.385 "bdev_nvme_get_controller_health_info", 00:06:18.385 "bdev_nvme_disable_controller", 00:06:18.385 "bdev_nvme_enable_controller", 00:06:18.385 "bdev_nvme_reset_controller", 00:06:18.385 "bdev_nvme_get_transport_statistics", 00:06:18.385 "bdev_nvme_apply_firmware", 00:06:18.385 "bdev_nvme_detach_controller", 00:06:18.385 "bdev_nvme_get_controllers", 00:06:18.385 "bdev_nvme_attach_controller", 00:06:18.385 "bdev_nvme_set_hotplug", 00:06:18.385 "bdev_nvme_set_options", 00:06:18.385 "bdev_passthru_delete", 00:06:18.385 "bdev_passthru_create", 00:06:18.385 "bdev_lvol_set_parent_bdev", 00:06:18.385 "bdev_lvol_set_parent", 00:06:18.385 "bdev_lvol_check_shallow_copy", 00:06:18.385 "bdev_lvol_start_shallow_copy", 00:06:18.385 "bdev_lvol_grow_lvstore", 00:06:18.385 "bdev_lvol_get_lvols", 00:06:18.385 "bdev_lvol_get_lvstores", 00:06:18.385 "bdev_lvol_delete", 00:06:18.385 "bdev_lvol_set_read_only", 00:06:18.385 "bdev_lvol_resize", 00:06:18.385 "bdev_lvol_decouple_parent", 00:06:18.385 "bdev_lvol_inflate", 00:06:18.385 "bdev_lvol_rename", 00:06:18.385 "bdev_lvol_clone_bdev", 00:06:18.385 "bdev_lvol_clone", 00:06:18.385 "bdev_lvol_snapshot", 
00:06:18.385 "bdev_lvol_create", 00:06:18.385 "bdev_lvol_delete_lvstore", 00:06:18.385 "bdev_lvol_rename_lvstore", 00:06:18.385 "bdev_lvol_create_lvstore", 00:06:18.385 "bdev_raid_set_options", 00:06:18.385 "bdev_raid_remove_base_bdev", 00:06:18.385 "bdev_raid_add_base_bdev", 00:06:18.385 "bdev_raid_delete", 00:06:18.385 "bdev_raid_create", 00:06:18.385 "bdev_raid_get_bdevs", 00:06:18.385 "bdev_error_inject_error", 00:06:18.385 "bdev_error_delete", 00:06:18.385 "bdev_error_create", 00:06:18.385 "bdev_split_delete", 00:06:18.385 "bdev_split_create", 00:06:18.385 "bdev_delay_delete", 00:06:18.385 "bdev_delay_create", 00:06:18.385 "bdev_delay_update_latency", 00:06:18.385 "bdev_zone_block_delete", 00:06:18.385 "bdev_zone_block_create", 00:06:18.385 "blobfs_create", 00:06:18.385 "blobfs_detect", 00:06:18.385 "blobfs_set_cache_size", 00:06:18.385 "bdev_aio_delete", 00:06:18.385 "bdev_aio_rescan", 00:06:18.385 "bdev_aio_create", 00:06:18.385 "bdev_ftl_set_property", 00:06:18.385 "bdev_ftl_get_properties", 00:06:18.385 "bdev_ftl_get_stats", 00:06:18.385 "bdev_ftl_unmap", 00:06:18.385 "bdev_ftl_unload", 00:06:18.385 "bdev_ftl_delete", 00:06:18.385 "bdev_ftl_load", 00:06:18.385 "bdev_ftl_create", 00:06:18.385 "bdev_virtio_attach_controller", 00:06:18.385 "bdev_virtio_scsi_get_devices", 00:06:18.385 "bdev_virtio_detach_controller", 00:06:18.385 "bdev_virtio_blk_set_hotplug", 00:06:18.385 "bdev_iscsi_delete", 00:06:18.385 "bdev_iscsi_create", 00:06:18.385 "bdev_iscsi_set_options", 00:06:18.385 "bdev_uring_delete", 00:06:18.385 "bdev_uring_rescan", 00:06:18.385 "bdev_uring_create", 00:06:18.385 "accel_error_inject_error", 00:06:18.385 "ioat_scan_accel_module", 00:06:18.385 "dsa_scan_accel_module", 00:06:18.385 "iaa_scan_accel_module", 00:06:18.385 "keyring_file_remove_key", 00:06:18.385 "keyring_file_add_key", 00:06:18.385 "keyring_linux_set_options", 00:06:18.385 "fsdev_aio_delete", 00:06:18.385 "fsdev_aio_create", 00:06:18.385 "iscsi_get_histogram", 00:06:18.385 "iscsi_enable_histogram", 00:06:18.385 "iscsi_set_options", 00:06:18.385 "iscsi_get_auth_groups", 00:06:18.385 "iscsi_auth_group_remove_secret", 00:06:18.385 "iscsi_auth_group_add_secret", 00:06:18.385 "iscsi_delete_auth_group", 00:06:18.385 "iscsi_create_auth_group", 00:06:18.385 "iscsi_set_discovery_auth", 00:06:18.385 "iscsi_get_options", 00:06:18.385 "iscsi_target_node_request_logout", 00:06:18.385 "iscsi_target_node_set_redirect", 00:06:18.385 "iscsi_target_node_set_auth", 00:06:18.385 "iscsi_target_node_add_lun", 00:06:18.385 "iscsi_get_stats", 00:06:18.385 "iscsi_get_connections", 00:06:18.385 "iscsi_portal_group_set_auth", 00:06:18.385 "iscsi_start_portal_group", 00:06:18.385 "iscsi_delete_portal_group", 00:06:18.385 "iscsi_create_portal_group", 00:06:18.385 "iscsi_get_portal_groups", 00:06:18.385 "iscsi_delete_target_node", 00:06:18.385 "iscsi_target_node_remove_pg_ig_maps", 00:06:18.385 "iscsi_target_node_add_pg_ig_maps", 00:06:18.385 "iscsi_create_target_node", 00:06:18.385 "iscsi_get_target_nodes", 00:06:18.385 "iscsi_delete_initiator_group", 00:06:18.385 "iscsi_initiator_group_remove_initiators", 00:06:18.385 "iscsi_initiator_group_add_initiators", 00:06:18.385 "iscsi_create_initiator_group", 00:06:18.385 "iscsi_get_initiator_groups", 00:06:18.385 "nvmf_set_crdt", 00:06:18.385 "nvmf_set_config", 00:06:18.385 "nvmf_set_max_subsystems", 00:06:18.385 "nvmf_stop_mdns_prr", 00:06:18.385 "nvmf_publish_mdns_prr", 00:06:18.385 "nvmf_subsystem_get_listeners", 00:06:18.385 "nvmf_subsystem_get_qpairs", 00:06:18.385 
"nvmf_subsystem_get_controllers", 00:06:18.385 "nvmf_get_stats", 00:06:18.385 "nvmf_get_transports", 00:06:18.385 "nvmf_create_transport", 00:06:18.385 "nvmf_get_targets", 00:06:18.385 "nvmf_delete_target", 00:06:18.385 "nvmf_create_target", 00:06:18.385 "nvmf_subsystem_allow_any_host", 00:06:18.385 "nvmf_subsystem_set_keys", 00:06:18.385 "nvmf_subsystem_remove_host", 00:06:18.385 "nvmf_subsystem_add_host", 00:06:18.385 "nvmf_ns_remove_host", 00:06:18.385 "nvmf_ns_add_host", 00:06:18.385 "nvmf_subsystem_remove_ns", 00:06:18.385 "nvmf_subsystem_set_ns_ana_group", 00:06:18.385 "nvmf_subsystem_add_ns", 00:06:18.385 "nvmf_subsystem_listener_set_ana_state", 00:06:18.385 "nvmf_discovery_get_referrals", 00:06:18.385 "nvmf_discovery_remove_referral", 00:06:18.385 "nvmf_discovery_add_referral", 00:06:18.385 "nvmf_subsystem_remove_listener", 00:06:18.385 "nvmf_subsystem_add_listener", 00:06:18.385 "nvmf_delete_subsystem", 00:06:18.385 "nvmf_create_subsystem", 00:06:18.385 "nvmf_get_subsystems", 00:06:18.385 "env_dpdk_get_mem_stats", 00:06:18.385 "nbd_get_disks", 00:06:18.385 "nbd_stop_disk", 00:06:18.385 "nbd_start_disk", 00:06:18.385 "ublk_recover_disk", 00:06:18.385 "ublk_get_disks", 00:06:18.385 "ublk_stop_disk", 00:06:18.385 "ublk_start_disk", 00:06:18.385 "ublk_destroy_target", 00:06:18.385 "ublk_create_target", 00:06:18.385 "virtio_blk_create_transport", 00:06:18.385 "virtio_blk_get_transports", 00:06:18.385 "vhost_controller_set_coalescing", 00:06:18.385 "vhost_get_controllers", 00:06:18.385 "vhost_delete_controller", 00:06:18.385 "vhost_create_blk_controller", 00:06:18.385 "vhost_scsi_controller_remove_target", 00:06:18.385 "vhost_scsi_controller_add_target", 00:06:18.385 "vhost_start_scsi_controller", 00:06:18.385 "vhost_create_scsi_controller", 00:06:18.385 "thread_set_cpumask", 00:06:18.385 "scheduler_set_options", 00:06:18.385 "framework_get_governor", 00:06:18.385 "framework_get_scheduler", 00:06:18.385 "framework_set_scheduler", 00:06:18.385 "framework_get_reactors", 00:06:18.385 "thread_get_io_channels", 00:06:18.385 "thread_get_pollers", 00:06:18.385 "thread_get_stats", 00:06:18.385 "framework_monitor_context_switch", 00:06:18.385 "spdk_kill_instance", 00:06:18.385 "log_enable_timestamps", 00:06:18.385 "log_get_flags", 00:06:18.385 "log_clear_flag", 00:06:18.385 "log_set_flag", 00:06:18.385 "log_get_level", 00:06:18.385 "log_set_level", 00:06:18.385 "log_get_print_level", 00:06:18.385 "log_set_print_level", 00:06:18.385 "framework_enable_cpumask_locks", 00:06:18.385 "framework_disable_cpumask_locks", 00:06:18.385 "framework_wait_init", 00:06:18.385 "framework_start_init", 00:06:18.385 "scsi_get_devices", 00:06:18.385 "bdev_get_histogram", 00:06:18.385 "bdev_enable_histogram", 00:06:18.385 "bdev_set_qos_limit", 00:06:18.385 "bdev_set_qd_sampling_period", 00:06:18.385 "bdev_get_bdevs", 00:06:18.385 "bdev_reset_iostat", 00:06:18.385 "bdev_get_iostat", 00:06:18.385 "bdev_examine", 00:06:18.385 "bdev_wait_for_examine", 00:06:18.385 "bdev_set_options", 00:06:18.385 "accel_get_stats", 00:06:18.385 "accel_set_options", 00:06:18.385 "accel_set_driver", 00:06:18.385 "accel_crypto_key_destroy", 00:06:18.385 "accel_crypto_keys_get", 00:06:18.385 "accel_crypto_key_create", 00:06:18.385 "accel_assign_opc", 00:06:18.385 "accel_get_module_info", 00:06:18.385 "accel_get_opc_assignments", 00:06:18.385 "vmd_rescan", 00:06:18.385 "vmd_remove_device", 00:06:18.386 "vmd_enable", 00:06:18.386 "sock_get_default_impl", 00:06:18.386 "sock_set_default_impl", 00:06:18.386 "sock_impl_set_options", 00:06:18.386 
"sock_impl_get_options", 00:06:18.386 "iobuf_get_stats", 00:06:18.386 "iobuf_set_options", 00:06:18.386 "keyring_get_keys", 00:06:18.386 "framework_get_pci_devices", 00:06:18.386 "framework_get_config", 00:06:18.386 "framework_get_subsystems", 00:06:18.386 "fsdev_set_opts", 00:06:18.386 "fsdev_get_opts", 00:06:18.386 "trace_get_info", 00:06:18.386 "trace_get_tpoint_group_mask", 00:06:18.386 "trace_disable_tpoint_group", 00:06:18.386 "trace_enable_tpoint_group", 00:06:18.386 "trace_clear_tpoint_mask", 00:06:18.386 "trace_set_tpoint_mask", 00:06:18.386 "notify_get_notifications", 00:06:18.386 "notify_get_types", 00:06:18.386 "spdk_get_version", 00:06:18.386 "rpc_get_methods" 00:06:18.386 ] 00:06:18.386 00:22:04 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:18.386 00:22:04 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:18.386 00:22:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:18.386 00:22:04 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:18.386 00:22:04 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 69997 00:06:18.386 00:22:04 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 69997 ']' 00:06:18.386 00:22:04 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 69997 00:06:18.386 00:22:04 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:18.386 00:22:04 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:18.386 00:22:04 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69997 00:06:18.386 killing process with pid 69997 00:06:18.386 00:22:04 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:18.386 00:22:04 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:18.386 00:22:04 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69997' 00:06:18.386 00:22:04 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 69997 00:06:18.386 00:22:04 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 69997 00:06:18.645 ************************************ 00:06:18.645 END TEST spdkcli_tcp 00:06:18.645 ************************************ 00:06:18.645 00:06:18.645 real 0m1.229s 00:06:18.645 user 0m2.129s 00:06:18.645 sys 0m0.374s 00:06:18.645 00:22:04 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.645 00:22:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:18.645 00:22:04 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:18.645 00:22:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.645 00:22:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.645 00:22:04 -- common/autotest_common.sh@10 -- # set +x 00:06:18.645 ************************************ 00:06:18.645 START TEST dpdk_mem_utility 00:06:18.645 ************************************ 00:06:18.645 00:22:04 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:18.645 * Looking for test storage... 
00:06:18.645 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:18.645 00:22:04 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:18.645 00:22:04 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:06:18.645 00:22:04 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:18.905 00:22:04 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:18.905 00:22:04 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.905 00:22:04 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.905 00:22:04 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.905 00:22:04 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.905 00:22:04 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.905 00:22:04 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.905 00:22:04 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.905 00:22:04 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.905 00:22:04 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.905 00:22:04 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.905 00:22:04 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.905 00:22:04 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:18.905 00:22:04 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:18.905 00:22:04 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.905 00:22:04 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:18.905 00:22:04 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:18.905 00:22:04 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:18.905 00:22:04 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.905 00:22:04 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:18.905 00:22:04 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.905 00:22:04 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:18.905 00:22:04 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:18.905 00:22:04 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.905 00:22:04 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:18.905 00:22:04 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.905 00:22:04 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.905 00:22:04 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.905 00:22:04 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:18.905 00:22:04 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.905 00:22:04 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:18.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.905 --rc genhtml_branch_coverage=1 00:06:18.905 --rc genhtml_function_coverage=1 00:06:18.905 --rc genhtml_legend=1 00:06:18.905 --rc geninfo_all_blocks=1 00:06:18.905 --rc geninfo_unexecuted_blocks=1 00:06:18.905 00:06:18.905 ' 00:06:18.905 00:22:04 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:18.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.905 --rc 
genhtml_branch_coverage=1 00:06:18.905 --rc genhtml_function_coverage=1 00:06:18.905 --rc genhtml_legend=1 00:06:18.905 --rc geninfo_all_blocks=1 00:06:18.905 --rc geninfo_unexecuted_blocks=1 00:06:18.905 00:06:18.905 ' 00:06:18.905 00:22:04 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:18.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.905 --rc genhtml_branch_coverage=1 00:06:18.905 --rc genhtml_function_coverage=1 00:06:18.905 --rc genhtml_legend=1 00:06:18.905 --rc geninfo_all_blocks=1 00:06:18.905 --rc geninfo_unexecuted_blocks=1 00:06:18.905 00:06:18.905 ' 00:06:18.905 00:22:04 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:18.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.905 --rc genhtml_branch_coverage=1 00:06:18.905 --rc genhtml_function_coverage=1 00:06:18.905 --rc genhtml_legend=1 00:06:18.905 --rc geninfo_all_blocks=1 00:06:18.905 --rc geninfo_unexecuted_blocks=1 00:06:18.905 00:06:18.905 ' 00:06:18.905 00:22:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:18.905 00:22:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70089 00:06:18.905 00:22:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:18.905 00:22:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70089 00:06:18.905 00:22:04 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 70089 ']' 00:06:18.905 00:22:04 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.905 00:22:04 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:18.905 00:22:04 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.905 00:22:04 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:18.906 00:22:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:18.906 [2024-12-17 00:22:04.740072] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:18.906 [2024-12-17 00:22:04.740384] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70089 ] 00:06:18.906 [2024-12-17 00:22:04.874827] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.165 [2024-12-17 00:22:04.914676] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.165 [2024-12-17 00:22:04.952461] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:19.165 00:22:05 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.165 00:22:05 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:19.165 00:22:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:19.165 00:22:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:19.165 00:22:05 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.165 00:22:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:19.165 { 00:06:19.165 "filename": "/tmp/spdk_mem_dump.txt" 00:06:19.165 } 00:06:19.165 00:22:05 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.165 00:22:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:19.165 DPDK memory size 860.000000 MiB in 1 heap(s) 00:06:19.165 1 heaps totaling size 860.000000 MiB 00:06:19.165 size: 860.000000 MiB heap id: 0 00:06:19.165 end heaps---------- 00:06:19.165 9 mempools totaling size 642.649841 MiB 00:06:19.165 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:19.165 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:19.165 size: 92.545471 MiB name: bdev_io_70089 00:06:19.165 size: 51.011292 MiB name: evtpool_70089 00:06:19.165 size: 50.003479 MiB name: msgpool_70089 00:06:19.165 size: 36.509338 MiB name: fsdev_io_70089 00:06:19.165 size: 21.763794 MiB name: PDU_Pool 00:06:19.165 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:19.165 size: 0.026123 MiB name: Session_Pool 00:06:19.165 end mempools------- 00:06:19.165 6 memzones totaling size 4.142822 MiB 00:06:19.165 size: 1.000366 MiB name: RG_ring_0_70089 00:06:19.165 size: 1.000366 MiB name: RG_ring_1_70089 00:06:19.165 size: 1.000366 MiB name: RG_ring_4_70089 00:06:19.165 size: 1.000366 MiB name: RG_ring_5_70089 00:06:19.165 size: 0.125366 MiB name: RG_ring_2_70089 00:06:19.165 size: 0.015991 MiB name: RG_ring_3_70089 00:06:19.165 end memzones------- 00:06:19.165 00:22:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:19.425 heap id: 0 total size: 860.000000 MiB number of busy elements: 321 number of free elements: 16 00:06:19.425 list of free elements. 
size: 13.933960 MiB 00:06:19.425 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:19.425 element at address: 0x200000800000 with size: 1.996948 MiB 00:06:19.425 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:06:19.425 element at address: 0x20001be00000 with size: 0.999878 MiB 00:06:19.425 element at address: 0x200034a00000 with size: 0.994446 MiB 00:06:19.425 element at address: 0x200009600000 with size: 0.959839 MiB 00:06:19.425 element at address: 0x200015e00000 with size: 0.954285 MiB 00:06:19.425 element at address: 0x20001c000000 with size: 0.936584 MiB 00:06:19.425 element at address: 0x200000200000 with size: 0.835022 MiB 00:06:19.425 element at address: 0x20001d800000 with size: 0.567322 MiB 00:06:19.425 element at address: 0x20000d800000 with size: 0.489258 MiB 00:06:19.425 element at address: 0x200003e00000 with size: 0.487183 MiB 00:06:19.426 element at address: 0x20001c200000 with size: 0.485657 MiB 00:06:19.426 element at address: 0x200007000000 with size: 0.480286 MiB 00:06:19.426 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:06:19.426 element at address: 0x200003a00000 with size: 0.352112 MiB 00:06:19.426 list of standard malloc elements. size: 199.269348 MiB 00:06:19.426 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:06:19.426 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:06:19.426 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:06:19.426 element at address: 0x20001befff80 with size: 1.000122 MiB 00:06:19.426 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:06:19.426 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:19.426 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:06:19.426 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:19.426 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:06:19.426 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000002d6e00 with size: 0.000183 MiB 
00:06:19.426 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003a5a240 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003a5e700 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003a7e9c0 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003a7ea80 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003a7eb40 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003a7ec00 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003a7ecc0 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003aff880 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7cb80 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7cc40 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7cd00 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7cdc0 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7ce80 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7cf40 with size: 0.000183 MiB 00:06:19.426 element at 
address: 0x200003e7d000 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7d0c0 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:19.426 element at address: 0x20000707af40 with size: 0.000183 MiB 00:06:19.426 element at address: 0x20000707b000 with size: 0.000183 MiB 00:06:19.426 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:06:19.426 element at address: 0x20000707b180 with size: 0.000183 MiB 00:06:19.426 element at address: 0x20000707b240 with size: 0.000183 MiB 00:06:19.426 element at address: 0x20000707b300 with size: 0.000183 MiB 00:06:19.426 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:06:19.426 element at address: 0x20000707b480 
with size: 0.000183 MiB 00:06:19.426 element at address: 0x20000707b540 with size: 0.000183 MiB 00:06:19.426 element at address: 0x20000707b600 with size: 0.000183 MiB 00:06:19.426 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:06:19.426 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:06:19.426 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:06:19.426 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:06:19.426 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:06:19.426 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:06:19.426 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:06:19.426 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:06:19.426 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:06:19.426 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:06:19.426 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:06:19.426 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:06:19.426 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:06:19.426 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:06:19.426 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:06:19.426 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:06:19.426 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:06:19.426 element at address: 0x20001d8913c0 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d891480 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d891540 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d891600 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d8916c0 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d891780 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d891840 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d891900 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d892080 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d892140 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d892200 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d892380 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d892440 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d892500 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d892680 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d892740 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d892800 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d8928c0 with size: 0.000183 MiB 
00:06:19.427 element at address: 0x20001d892980 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d893040 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d893100 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d893280 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d893340 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d893400 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d893580 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d893640 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d893700 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d893880 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d893940 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d894000 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d894180 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d894240 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d894300 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d894480 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d894540 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d894600 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d894780 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d894840 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d894900 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:06:19.427 element at 
address: 0x20001d894e40 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d895080 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d895140 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d895200 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d895380 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20001d895440 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac65500 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac655c0 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6c1c0 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6c3c0 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6c480 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6df80 
with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6eb80 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:06:19.427 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:06:19.428 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:06:19.428 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:06:19.428 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:06:19.428 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:06:19.428 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:06:19.428 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:06:19.428 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:06:19.428 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:06:19.428 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:06:19.428 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:06:19.428 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:06:19.428 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:06:19.428 list of memzone associated elements. 
size: 646.796692 MiB 00:06:19.428 element at address: 0x20001d895500 with size: 211.416748 MiB 00:06:19.428 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:19.428 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:06:19.428 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:19.428 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:06:19.428 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_70089_0 00:06:19.428 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:19.428 associated memzone info: size: 48.002930 MiB name: MP_evtpool_70089_0 00:06:19.428 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:19.428 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70089_0 00:06:19.428 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:06:19.428 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70089_0 00:06:19.428 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:06:19.428 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:19.428 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:06:19.428 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:19.428 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:19.428 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_70089 00:06:19.428 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:19.428 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70089 00:06:19.428 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:19.428 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70089 00:06:19.428 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:06:19.428 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:19.428 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:06:19.428 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:19.428 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:06:19.428 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:19.428 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:06:19.428 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:19.428 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:19.428 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70089 00:06:19.428 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:19.428 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70089 00:06:19.428 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:06:19.428 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70089 00:06:19.428 element at address: 0x200034afe940 with size: 1.000488 MiB 00:06:19.428 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70089 00:06:19.428 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:06:19.428 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70089 00:06:19.428 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:06:19.428 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70089 00:06:19.428 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:06:19.428 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:19.428 element at address: 0x20000707b780 with size: 0.500488 MiB 00:06:19.428 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:06:19.428 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:06:19.428 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:19.428 element at address: 0x200003a5e7c0 with size: 0.125488 MiB 00:06:19.428 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70089 00:06:19.428 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:06:19.428 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:19.428 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:06:19.428 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:19.428 element at address: 0x200003a5a500 with size: 0.016113 MiB 00:06:19.428 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70089 00:06:19.428 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:06:19.428 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:19.428 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:19.428 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70089 00:06:19.428 element at address: 0x200003aff940 with size: 0.000305 MiB 00:06:19.428 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70089 00:06:19.428 element at address: 0x200003a5a300 with size: 0.000305 MiB 00:06:19.428 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70089 00:06:19.428 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:06:19.428 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:19.428 00:22:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:19.428 00:22:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70089 00:06:19.428 00:22:05 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 70089 ']' 00:06:19.428 00:22:05 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 70089 00:06:19.428 00:22:05 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:19.428 00:22:05 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:19.428 00:22:05 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70089 00:06:19.428 killing process with pid 70089 00:06:19.428 00:22:05 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:19.428 00:22:05 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:19.428 00:22:05 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70089' 00:06:19.428 00:22:05 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 70089 00:06:19.428 00:22:05 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 70089 00:06:19.687 ************************************ 00:06:19.687 END TEST dpdk_mem_utility 00:06:19.687 ************************************ 00:06:19.687 00:06:19.687 real 0m1.067s 00:06:19.687 user 0m1.129s 00:06:19.687 sys 0m0.333s 00:06:19.687 00:22:05 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.687 00:22:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:19.687 00:22:05 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:19.687 00:22:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.687 00:22:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.687 00:22:05 -- common/autotest_common.sh@10 -- # set +x 
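Note: the dpdk_mem_utility run above drives SPDK's DPDK memory introspection end to end: the target writes its DPDK memory statistics through the env_dpdk_get_mem_stats RPC (to /tmp/spdk_mem_dump.txt, as returned above) and scripts/dpdk_mem_info.py renders the heap/mempool/memzone summaries plus the per-heap element list printed above. A minimal manual sketch of the same flow, assuming a running spdk_tgt on the default RPC socket and repo-relative script paths:
  # ask the target to write its DPDK memory dump (the RPC returns the dump file name)
  ./scripts/rpc.py env_dpdk_get_mem_stats
  # summarize heaps, mempools and memzones from the dump
  ./scripts/dpdk_mem_info.py
  # print the detailed element list for heap 0, as seen in the output above
  ./scripts/dpdk_mem_info.py -m 0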
00:06:19.687 ************************************ 00:06:19.687 START TEST event 00:06:19.687 ************************************ 00:06:19.687 00:22:05 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:19.687 * Looking for test storage... 00:06:19.687 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:19.687 00:22:05 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:19.687 00:22:05 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:19.687 00:22:05 event -- common/autotest_common.sh@1681 -- # lcov --version 00:06:19.947 00:22:05 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:19.947 00:22:05 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.947 00:22:05 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.947 00:22:05 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.947 00:22:05 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.947 00:22:05 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.947 00:22:05 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.947 00:22:05 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.947 00:22:05 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.947 00:22:05 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.947 00:22:05 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.947 00:22:05 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.947 00:22:05 event -- scripts/common.sh@344 -- # case "$op" in 00:06:19.947 00:22:05 event -- scripts/common.sh@345 -- # : 1 00:06:19.947 00:22:05 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.947 00:22:05 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:19.947 00:22:05 event -- scripts/common.sh@365 -- # decimal 1 00:06:19.947 00:22:05 event -- scripts/common.sh@353 -- # local d=1 00:06:19.947 00:22:05 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.947 00:22:05 event -- scripts/common.sh@355 -- # echo 1 00:06:19.947 00:22:05 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.947 00:22:05 event -- scripts/common.sh@366 -- # decimal 2 00:06:19.947 00:22:05 event -- scripts/common.sh@353 -- # local d=2 00:06:19.947 00:22:05 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.947 00:22:05 event -- scripts/common.sh@355 -- # echo 2 00:06:19.947 00:22:05 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.947 00:22:05 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.947 00:22:05 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.947 00:22:05 event -- scripts/common.sh@368 -- # return 0 00:06:19.947 00:22:05 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.947 00:22:05 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:19.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.947 --rc genhtml_branch_coverage=1 00:06:19.947 --rc genhtml_function_coverage=1 00:06:19.947 --rc genhtml_legend=1 00:06:19.947 --rc geninfo_all_blocks=1 00:06:19.947 --rc geninfo_unexecuted_blocks=1 00:06:19.947 00:06:19.947 ' 00:06:19.947 00:22:05 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:19.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.947 --rc genhtml_branch_coverage=1 00:06:19.947 --rc genhtml_function_coverage=1 00:06:19.947 --rc genhtml_legend=1 00:06:19.947 --rc 
geninfo_all_blocks=1 00:06:19.947 --rc geninfo_unexecuted_blocks=1 00:06:19.947 00:06:19.947 ' 00:06:19.947 00:22:05 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:19.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.947 --rc genhtml_branch_coverage=1 00:06:19.947 --rc genhtml_function_coverage=1 00:06:19.947 --rc genhtml_legend=1 00:06:19.947 --rc geninfo_all_blocks=1 00:06:19.947 --rc geninfo_unexecuted_blocks=1 00:06:19.947 00:06:19.947 ' 00:06:19.947 00:22:05 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:19.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.947 --rc genhtml_branch_coverage=1 00:06:19.947 --rc genhtml_function_coverage=1 00:06:19.947 --rc genhtml_legend=1 00:06:19.947 --rc geninfo_all_blocks=1 00:06:19.947 --rc geninfo_unexecuted_blocks=1 00:06:19.947 00:06:19.947 ' 00:06:19.947 00:22:05 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:19.947 00:22:05 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:19.947 00:22:05 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:19.947 00:22:05 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:19.947 00:22:05 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.947 00:22:05 event -- common/autotest_common.sh@10 -- # set +x 00:06:19.947 ************************************ 00:06:19.947 START TEST event_perf 00:06:19.947 ************************************ 00:06:19.947 00:22:05 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:19.947 Running I/O for 1 seconds...[2024-12-17 00:22:05.802694] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:19.947 [2024-12-17 00:22:05.802921] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70161 ] 00:06:19.947 [2024-12-17 00:22:05.939015] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:20.206 [2024-12-17 00:22:05.979996] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.206 [2024-12-17 00:22:05.980127] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.206 Running I/O for 1 seconds...[2024-12-17 00:22:05.980327] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.206 [2024-12-17 00:22:05.980337] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:21.144 00:06:21.144 lcore 0: 186820 00:06:21.144 lcore 1: 186818 00:06:21.144 lcore 2: 186819 00:06:21.144 lcore 3: 186818 00:06:21.144 done. 
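Note: event_perf above was launched with -m 0xF -t 1, so each of the four reactors polled events for roughly one second; the per-lcore counters it printed sum to about 747k events. A quick check of that arithmetic, using the numbers from the output above:
  echo $((186820 + 186818 + 186819 + 186818))   # 747275 events across 4 lcores in ~1 s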
00:06:21.144 00:06:21.144 real 0m1.254s 00:06:21.144 user 0m4.083s 00:06:21.144 sys 0m0.048s 00:06:21.144 00:22:07 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.144 ************************************ 00:06:21.144 END TEST event_perf 00:06:21.144 ************************************ 00:06:21.144 00:22:07 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:21.144 00:22:07 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:21.144 00:22:07 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:21.144 00:22:07 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.144 00:22:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:21.144 ************************************ 00:06:21.144 START TEST event_reactor 00:06:21.144 ************************************ 00:06:21.144 00:22:07 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:21.144 [2024-12-17 00:22:07.114193] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:21.144 [2024-12-17 00:22:07.114296] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70194 ] 00:06:21.444 [2024-12-17 00:22:07.256995] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.444 [2024-12-17 00:22:07.289781] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.386 test_start 00:06:22.386 oneshot 00:06:22.386 tick 100 00:06:22.386 tick 100 00:06:22.386 tick 250 00:06:22.386 tick 100 00:06:22.386 tick 100 00:06:22.386 tick 100 00:06:22.386 tick 250 00:06:22.386 tick 500 00:06:22.386 tick 100 00:06:22.386 tick 100 00:06:22.386 tick 250 00:06:22.386 tick 100 00:06:22.386 tick 100 00:06:22.386 test_end 00:06:22.386 ************************************ 00:06:22.386 END TEST event_reactor 00:06:22.386 ************************************ 00:06:22.386 00:06:22.386 real 0m1.247s 00:06:22.386 user 0m1.100s 00:06:22.386 sys 0m0.041s 00:06:22.386 00:22:08 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.386 00:22:08 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:22.386 00:22:08 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:22.386 00:22:08 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:22.386 00:22:08 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.386 00:22:08 event -- common/autotest_common.sh@10 -- # set +x 00:06:22.645 ************************************ 00:06:22.645 START TEST event_reactor_perf 00:06:22.645 ************************************ 00:06:22.645 00:22:08 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:22.645 [2024-12-17 00:22:08.403952] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:22.645 [2024-12-17 00:22:08.404042] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70229 ] 00:06:22.645 [2024-12-17 00:22:08.541884] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.645 [2024-12-17 00:22:08.588759] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.023 test_start 00:06:24.023 test_end 00:06:24.023 Performance: 412296 events per second 00:06:24.023 00:06:24.023 real 0m1.279s 00:06:24.023 user 0m1.124s 00:06:24.023 sys 0m0.049s 00:06:24.023 00:22:09 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.023 00:22:09 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:24.023 ************************************ 00:06:24.023 END TEST event_reactor_perf 00:06:24.023 ************************************ 00:06:24.023 00:22:09 event -- event/event.sh@49 -- # uname -s 00:06:24.023 00:22:09 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:24.023 00:22:09 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:24.023 00:22:09 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.023 00:22:09 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.023 00:22:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.023 ************************************ 00:06:24.023 START TEST event_scheduler 00:06:24.023 ************************************ 00:06:24.023 00:22:09 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:24.023 * Looking for test storage... 
00:06:24.023 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:24.023 00:22:09 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:24.023 00:22:09 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:06:24.023 00:22:09 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:24.023 00:22:09 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:24.023 00:22:09 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.023 00:22:09 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.023 00:22:09 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.023 00:22:09 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.023 00:22:09 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.023 00:22:09 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.023 00:22:09 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.023 00:22:09 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.023 00:22:09 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.023 00:22:09 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.023 00:22:09 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.023 00:22:09 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:24.023 00:22:09 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:24.023 00:22:09 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.023 00:22:09 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:24.023 00:22:09 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:24.023 00:22:09 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:24.023 00:22:09 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.023 00:22:09 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:24.023 00:22:09 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.023 00:22:09 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:24.023 00:22:09 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:24.023 00:22:09 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.023 00:22:09 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:24.023 00:22:09 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:24.023 00:22:09 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.023 00:22:09 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.023 00:22:09 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:24.023 00:22:09 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.023 00:22:09 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:24.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.023 --rc genhtml_branch_coverage=1 00:06:24.023 --rc genhtml_function_coverage=1 00:06:24.023 --rc genhtml_legend=1 00:06:24.023 --rc geninfo_all_blocks=1 00:06:24.023 --rc geninfo_unexecuted_blocks=1 00:06:24.023 00:06:24.023 ' 00:06:24.023 00:22:09 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:24.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.023 --rc genhtml_branch_coverage=1 00:06:24.023 --rc genhtml_function_coverage=1 00:06:24.023 --rc genhtml_legend=1 00:06:24.023 --rc geninfo_all_blocks=1 00:06:24.023 --rc geninfo_unexecuted_blocks=1 00:06:24.023 00:06:24.023 ' 00:06:24.023 00:22:09 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:24.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.023 --rc genhtml_branch_coverage=1 00:06:24.023 --rc genhtml_function_coverage=1 00:06:24.023 --rc genhtml_legend=1 00:06:24.023 --rc geninfo_all_blocks=1 00:06:24.023 --rc geninfo_unexecuted_blocks=1 00:06:24.023 00:06:24.023 ' 00:06:24.024 00:22:09 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:24.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.024 --rc genhtml_branch_coverage=1 00:06:24.024 --rc genhtml_function_coverage=1 00:06:24.024 --rc genhtml_legend=1 00:06:24.024 --rc geninfo_all_blocks=1 00:06:24.024 --rc geninfo_unexecuted_blocks=1 00:06:24.024 00:06:24.024 ' 00:06:24.024 00:22:09 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:24.024 00:22:09 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70299 00:06:24.024 00:22:09 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:24.024 00:22:09 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70299 00:06:24.024 00:22:09 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:24.024 00:22:09 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 70299 ']' 00:06:24.024 00:22:09 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.024 00:22:09 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.024 00:22:09 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.024 00:22:09 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.024 00:22:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:24.024 [2024-12-17 00:22:09.964801] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:24.024 [2024-12-17 00:22:09.965093] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70299 ] 00:06:24.283 [2024-12-17 00:22:10.103208] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:24.283 [2024-12-17 00:22:10.138231] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.283 [2024-12-17 00:22:10.138368] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.283 [2024-12-17 00:22:10.139264] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.283 [2024-12-17 00:22:10.139203] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:24.283 00:22:10 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.283 00:22:10 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:24.283 00:22:10 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:24.283 00:22:10 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.283 00:22:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:24.283 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:24.283 POWER: Cannot set governor of lcore 0 to userspace 00:06:24.283 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:24.283 POWER: Cannot set governor of lcore 0 to performance 00:06:24.283 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:24.283 POWER: Cannot set governor of lcore 0 to userspace 00:06:24.283 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:24.283 POWER: Unable to set Power Management Environment for lcore 0 00:06:24.283 [2024-12-17 00:22:10.224720] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:24.283 [2024-12-17 00:22:10.224822] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:24.283 [2024-12-17 00:22:10.224861] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:24.283 [2024-12-17 00:22:10.224878] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:24.283 [2024-12-17 00:22:10.224886] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:24.283 [2024-12-17 00:22:10.224893] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:24.283 00:22:10 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.283 00:22:10 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:24.283 00:22:10 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.283 00:22:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:24.283 [2024-12-17 00:22:10.260323] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:24.283 [2024-12-17 00:22:10.274969] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
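Note: the scheduler test above starts its app with --wait-for-rpc, switches to the dynamic scheduler, and only then completes initialization; because the cpufreq/ACPI power interfaces are not reachable inside the test VM, the DPDK governor fails (the POWER/GUEST_CHANNEL messages above) and the dynamic scheduler falls back to its built-in defaults (load limit 20, core limit 80, core busy 95). A sketch of the same RPC sequence, assuming scripts/rpc.py against the default /var/tmp/spdk.sock:
  # while the app is paused at --wait-for-rpc, select the dynamic scheduler
  ./scripts/rpc.py framework_set_scheduler dynamic
  # resume subsystem initialization so the reactors start scheduling threads
  ./scripts/rpc.py framework_start_init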
00:06:24.283 00:22:10 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.283 00:22:10 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:24.283 00:22:10 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.283 00:22:10 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.283 00:22:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:24.542 ************************************ 00:06:24.542 START TEST scheduler_create_thread 00:06:24.542 ************************************ 00:06:24.542 00:22:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:24.542 00:22:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:24.542 00:22:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.542 00:22:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.542 2 00:06:24.542 00:22:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.542 00:22:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:24.542 00:22:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.542 00:22:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.542 3 00:06:24.542 00:22:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.542 00:22:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:24.542 00:22:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.542 00:22:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.542 4 00:06:24.542 00:22:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.542 00:22:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:24.542 00:22:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.542 00:22:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.542 5 00:06:24.542 00:22:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.542 00:22:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:24.542 00:22:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.542 00:22:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.542 6 00:06:24.542 00:22:10 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.542 00:22:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:24.542 00:22:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.542 00:22:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.542 7 00:06:24.542 00:22:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.542 00:22:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:24.542 00:22:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.542 00:22:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.542 8 00:06:24.542 00:22:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.542 00:22:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:24.542 00:22:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.543 00:22:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.543 9 00:06:24.543 00:22:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.543 00:22:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:24.543 00:22:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.543 00:22:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.543 10 00:06:24.543 00:22:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.543 00:22:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:24.543 00:22:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.543 00:22:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.543 00:22:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.543 00:22:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:24.543 00:22:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:24.543 00:22:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.543 00:22:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.479 00:22:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.479 00:22:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:25.479 00:22:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.479 00:22:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.856 00:22:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.856 00:22:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:26.856 00:22:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:26.856 00:22:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.856 00:22:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.793 ************************************ 00:06:27.793 END TEST scheduler_create_thread 00:06:27.793 ************************************ 00:06:27.793 00:22:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.793 00:06:27.793 real 0m3.374s 00:06:27.793 user 0m0.020s 00:06:27.793 sys 0m0.006s 00:06:27.793 00:22:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.793 00:22:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.793 00:22:13 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:27.793 00:22:13 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70299 00:06:27.793 00:22:13 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 70299 ']' 00:06:27.793 00:22:13 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 70299 00:06:27.793 00:22:13 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:27.793 00:22:13 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:27.793 00:22:13 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70299 00:06:27.793 killing process with pid 70299 00:06:27.793 00:22:13 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:27.793 00:22:13 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:27.793 00:22:13 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70299' 00:06:27.793 00:22:13 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 70299 00:06:27.793 00:22:13 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 70299 00:06:28.052 [2024-12-17 00:22:14.042352] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
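The scheduler_create_thread test that just finished drives the running app purely through plugin RPCs: a 100%-active and an idle thread pinned to each of the four cores, two unpinned threads, one activity change, and one create-then-delete. A rough equivalent issued with rpc.py --plugin, assuming the scheduler_plugin module is on PYTHONPATH as the test harness arranges (names, masks, and activity values mirror the trace; capturing the returned thread id is how the test obtains ids 11 and 12):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin"
    # one busy and one idle thread pinned to core 0 (the test repeats this for masks 0x2, 0x4, 0x8)
    $RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100
    $RPC scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    # unpinned threads: one at 30% activity, one created idle and then raised to 50%
    $RPC scheduler_thread_create -n one_third_active -a 30
    tid=$($RPC scheduler_thread_create -n half_active -a 0)
    $RPC scheduler_thread_set_active "$tid" 50
    # a thread can also be created and deleted while the scheduler is running
    tid=$($RPC scheduler_thread_create -n deleted -a 100)
    $RPC scheduler_thread_delete "$tid"

The trap registered earlier then kills pid 70299 on exit, which is the shutdown notice seen above.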
00:06:28.311 00:06:28.311 real 0m4.502s 00:06:28.311 user 0m7.803s 00:06:28.311 sys 0m0.311s 00:06:28.311 00:22:14 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.311 00:22:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:28.311 ************************************ 00:06:28.311 END TEST event_scheduler 00:06:28.311 ************************************ 00:06:28.311 00:22:14 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:28.311 00:22:14 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:28.311 00:22:14 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:28.311 00:22:14 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.311 00:22:14 event -- common/autotest_common.sh@10 -- # set +x 00:06:28.311 ************************************ 00:06:28.311 START TEST app_repeat 00:06:28.311 ************************************ 00:06:28.311 00:22:14 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:28.311 00:22:14 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.311 00:22:14 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.311 00:22:14 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:28.311 00:22:14 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.311 00:22:14 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:28.311 00:22:14 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:28.311 00:22:14 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:28.311 00:22:14 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70391 00:06:28.311 00:22:14 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:28.311 00:22:14 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:28.311 Process app_repeat pid: 70391 00:06:28.311 00:22:14 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70391' 00:06:28.311 spdk_app_start Round 0 00:06:28.311 00:22:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:28.311 00:22:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:28.311 00:22:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70391 /var/tmp/spdk-nbd.sock 00:06:28.311 00:22:14 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70391 ']' 00:06:28.311 00:22:14 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:28.311 00:22:14 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:28.311 00:22:14 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:28.311 00:22:14 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.311 00:22:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:28.570 [2024-12-17 00:22:14.316085] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:28.570 [2024-12-17 00:22:14.316215] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70391 ] 00:06:28.570 [2024-12-17 00:22:14.452947] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.570 [2024-12-17 00:22:14.489487] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.570 [2024-12-17 00:22:14.489494] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.570 [2024-12-17 00:22:14.516689] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:28.829 00:22:14 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:28.829 00:22:14 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:28.829 00:22:14 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:29.088 Malloc0 00:06:29.088 00:22:14 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:29.347 Malloc1 00:06:29.347 00:22:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:29.347 00:22:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.347 00:22:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:29.347 00:22:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:29.347 00:22:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.347 00:22:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:29.347 00:22:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:29.347 00:22:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.347 00:22:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:29.347 00:22:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:29.347 00:22:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.347 00:22:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:29.347 00:22:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:29.347 00:22:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:29.347 00:22:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.347 00:22:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:29.606 /dev/nbd0 00:06:29.606 00:22:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:29.606 00:22:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:29.606 00:22:15 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:29.606 00:22:15 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:29.606 00:22:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:29.606 00:22:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:29.606 00:22:15 
event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:29.606 00:22:15 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:29.606 00:22:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:29.606 00:22:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:29.606 00:22:15 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:29.606 1+0 records in 00:06:29.606 1+0 records out 00:06:29.606 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463389 s, 8.8 MB/s 00:06:29.606 00:22:15 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.606 00:22:15 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:29.606 00:22:15 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.606 00:22:15 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:29.606 00:22:15 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:29.606 00:22:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.606 00:22:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.606 00:22:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:29.865 /dev/nbd1 00:06:29.865 00:22:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:29.865 00:22:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:29.865 00:22:15 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:29.865 00:22:15 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:29.865 00:22:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:29.865 00:22:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:29.865 00:22:15 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:29.865 00:22:15 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:29.865 00:22:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:29.865 00:22:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:29.865 00:22:15 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:29.865 1+0 records in 00:06:29.865 1+0 records out 00:06:29.865 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244488 s, 16.8 MB/s 00:06:29.865 00:22:15 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.865 00:22:15 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:29.865 00:22:15 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.865 00:22:15 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:29.865 00:22:15 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:29.865 00:22:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.865 00:22:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.865 00:22:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:06:29.865 00:22:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.865 00:22:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.125 00:22:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:30.125 { 00:06:30.125 "nbd_device": "/dev/nbd0", 00:06:30.125 "bdev_name": "Malloc0" 00:06:30.125 }, 00:06:30.125 { 00:06:30.125 "nbd_device": "/dev/nbd1", 00:06:30.125 "bdev_name": "Malloc1" 00:06:30.125 } 00:06:30.125 ]' 00:06:30.125 00:22:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:30.125 { 00:06:30.125 "nbd_device": "/dev/nbd0", 00:06:30.125 "bdev_name": "Malloc0" 00:06:30.125 }, 00:06:30.125 { 00:06:30.125 "nbd_device": "/dev/nbd1", 00:06:30.125 "bdev_name": "Malloc1" 00:06:30.125 } 00:06:30.125 ]' 00:06:30.125 00:22:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.384 00:22:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:30.384 /dev/nbd1' 00:06:30.384 00:22:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.384 00:22:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:30.384 /dev/nbd1' 00:06:30.384 00:22:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:30.384 00:22:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:30.384 00:22:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:30.384 00:22:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:30.384 00:22:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:30.384 00:22:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.384 00:22:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:30.384 00:22:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:30.384 00:22:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:30.384 00:22:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:30.384 00:22:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:30.384 256+0 records in 00:06:30.384 256+0 records out 00:06:30.384 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00941835 s, 111 MB/s 00:06:30.384 00:22:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.384 00:22:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:30.384 256+0 records in 00:06:30.384 256+0 records out 00:06:30.384 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225218 s, 46.6 MB/s 00:06:30.384 00:22:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.384 00:22:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:30.384 256+0 records in 00:06:30.384 256+0 records out 00:06:30.384 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252838 s, 41.5 MB/s 00:06:30.384 00:22:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:30.384 00:22:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.384 00:22:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:30.384 00:22:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:30.384 00:22:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:30.384 00:22:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:30.384 00:22:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:30.384 00:22:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.384 00:22:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:30.384 00:22:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.384 00:22:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:30.384 00:22:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:30.384 00:22:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:30.384 00:22:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.384 00:22:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.384 00:22:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:30.384 00:22:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:30.384 00:22:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.384 00:22:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:30.643 00:22:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:30.643 00:22:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:30.643 00:22:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:30.643 00:22:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.643 00:22:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.643 00:22:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:30.643 00:22:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:30.643 00:22:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.643 00:22:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.643 00:22:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:30.902 00:22:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:30.902 00:22:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:30.902 00:22:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:30.902 00:22:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.902 00:22:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.902 00:22:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:30.902 00:22:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:30.902 00:22:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.902 00:22:16 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.902 00:22:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.902 00:22:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:31.161 00:22:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:31.161 00:22:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:31.161 00:22:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:31.420 00:22:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:31.420 00:22:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:31.420 00:22:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:31.420 00:22:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:31.420 00:22:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:31.420 00:22:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:31.420 00:22:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:31.420 00:22:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:31.420 00:22:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:31.420 00:22:17 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:31.679 00:22:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:31.679 [2024-12-17 00:22:17.524611] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:31.679 [2024-12-17 00:22:17.559670] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.679 [2024-12-17 00:22:17.559681] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.679 [2024-12-17 00:22:17.586499] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:31.679 [2024-12-17 00:22:17.586592] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:31.679 [2024-12-17 00:22:17.586604] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:34.964 spdk_app_start Round 1 00:06:34.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:34.964 00:22:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:34.964 00:22:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:34.964 00:22:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70391 /var/tmp/spdk-nbd.sock 00:06:34.964 00:22:20 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70391 ']' 00:06:34.964 00:22:20 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:34.964 00:22:20 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.964 00:22:20 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:34.964 00:22:20 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.964 00:22:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:34.964 00:22:20 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.964 00:22:20 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:34.964 00:22:20 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:35.222 Malloc0 00:06:35.222 00:22:20 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:35.222 Malloc1 00:06:35.480 00:22:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:35.480 00:22:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.480 00:22:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.480 00:22:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:35.480 00:22:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.480 00:22:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:35.480 00:22:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:35.480 00:22:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.481 00:22:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.481 00:22:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:35.481 00:22:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.481 00:22:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:35.481 00:22:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:35.481 00:22:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:35.481 00:22:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.481 00:22:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:35.739 /dev/nbd0 00:06:35.739 00:22:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:35.739 00:22:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:35.739 00:22:21 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:35.739 00:22:21 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:35.739 00:22:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:35.739 00:22:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:35.740 00:22:21 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:35.740 00:22:21 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:35.740 00:22:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:35.740 00:22:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:35.740 00:22:21 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:35.740 1+0 records in 00:06:35.740 1+0 records out 
00:06:35.740 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246727 s, 16.6 MB/s 00:06:35.740 00:22:21 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:35.740 00:22:21 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:35.740 00:22:21 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:35.740 00:22:21 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:35.740 00:22:21 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:35.740 00:22:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:35.740 00:22:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.740 00:22:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:35.998 /dev/nbd1 00:06:35.998 00:22:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:35.998 00:22:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:35.998 00:22:21 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:35.998 00:22:21 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:35.998 00:22:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:35.998 00:22:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:35.998 00:22:21 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:35.998 00:22:21 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:35.998 00:22:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:35.998 00:22:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:35.998 00:22:21 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:35.998 1+0 records in 00:06:35.998 1+0 records out 00:06:35.998 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211909 s, 19.3 MB/s 00:06:35.998 00:22:21 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:35.998 00:22:21 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:35.998 00:22:21 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:35.998 00:22:21 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:35.998 00:22:21 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:35.998 00:22:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:35.998 00:22:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.999 00:22:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:35.999 00:22:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.999 00:22:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:36.257 00:22:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:36.257 { 00:06:36.257 "nbd_device": "/dev/nbd0", 00:06:36.257 "bdev_name": "Malloc0" 00:06:36.257 }, 00:06:36.257 { 00:06:36.257 "nbd_device": "/dev/nbd1", 00:06:36.257 "bdev_name": "Malloc1" 00:06:36.257 } 
00:06:36.257 ]' 00:06:36.257 00:22:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:36.257 00:22:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:36.257 { 00:06:36.257 "nbd_device": "/dev/nbd0", 00:06:36.257 "bdev_name": "Malloc0" 00:06:36.257 }, 00:06:36.257 { 00:06:36.257 "nbd_device": "/dev/nbd1", 00:06:36.257 "bdev_name": "Malloc1" 00:06:36.257 } 00:06:36.257 ]' 00:06:36.257 00:22:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:36.257 /dev/nbd1' 00:06:36.257 00:22:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:36.257 /dev/nbd1' 00:06:36.257 00:22:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:36.257 00:22:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:36.257 00:22:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:36.257 00:22:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:36.257 00:22:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:36.257 00:22:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:36.257 00:22:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.257 00:22:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:36.257 00:22:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:36.257 00:22:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:36.257 00:22:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:36.257 00:22:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:36.257 256+0 records in 00:06:36.257 256+0 records out 00:06:36.257 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0072496 s, 145 MB/s 00:06:36.257 00:22:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.257 00:22:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:36.257 256+0 records in 00:06:36.257 256+0 records out 00:06:36.257 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0216915 s, 48.3 MB/s 00:06:36.257 00:22:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.257 00:22:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:36.257 256+0 records in 00:06:36.258 256+0 records out 00:06:36.258 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024096 s, 43.5 MB/s 00:06:36.258 00:22:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:36.258 00:22:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.258 00:22:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:36.258 00:22:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:36.258 00:22:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:36.258 00:22:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:36.258 00:22:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:36.258 00:22:22 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:36.258 00:22:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:36.258 00:22:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:36.258 00:22:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:36.258 00:22:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:36.258 00:22:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:36.258 00:22:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.258 00:22:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.258 00:22:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:36.258 00:22:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:36.258 00:22:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.258 00:22:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:36.567 00:22:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:36.567 00:22:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:36.567 00:22:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:36.567 00:22:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.567 00:22:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.567 00:22:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:36.567 00:22:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:36.567 00:22:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.567 00:22:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.567 00:22:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:36.844 00:22:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:36.844 00:22:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:36.844 00:22:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:36.844 00:22:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.844 00:22:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.844 00:22:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:36.844 00:22:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:36.844 00:22:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.844 00:22:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:36.844 00:22:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.844 00:22:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:37.102 00:22:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:37.102 00:22:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:37.102 00:22:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:06:37.361 00:22:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:37.361 00:22:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:37.361 00:22:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.361 00:22:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:37.361 00:22:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:37.361 00:22:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:37.361 00:22:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:37.361 00:22:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:37.361 00:22:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:37.361 00:22:23 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:37.620 00:22:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:37.620 [2024-12-17 00:22:23.559254] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:37.620 [2024-12-17 00:22:23.591604] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.620 [2024-12-17 00:22:23.591613] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.879 [2024-12-17 00:22:23.622045] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:37.879 [2024-12-17 00:22:23.622132] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:37.879 [2024-12-17 00:22:23.622160] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:41.165 spdk_app_start Round 2 00:06:41.165 00:22:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:41.165 00:22:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:41.165 00:22:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70391 /var/tmp/spdk-nbd.sock 00:06:41.165 00:22:26 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70391 ']' 00:06:41.165 00:22:26 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:41.165 00:22:26 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:41.165 00:22:26 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:41.165 00:22:26 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.165 00:22:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:41.165 00:22:26 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:41.165 00:22:26 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:41.165 00:22:26 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:41.165 Malloc0 00:06:41.165 00:22:27 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:41.425 Malloc1 00:06:41.425 00:22:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:41.425 00:22:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.425 00:22:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:41.425 00:22:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:41.425 00:22:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.425 00:22:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:41.425 00:22:27 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:41.425 00:22:27 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.425 00:22:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:41.425 00:22:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:41.425 00:22:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.425 00:22:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:41.425 00:22:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:41.425 00:22:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:41.425 00:22:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.425 00:22:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:41.684 /dev/nbd0 00:06:41.684 00:22:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:41.684 00:22:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:41.684 00:22:27 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:41.684 00:22:27 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:41.684 00:22:27 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:41.684 00:22:27 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:41.684 00:22:27 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:41.684 00:22:27 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:41.684 00:22:27 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:41.684 00:22:27 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:41.684 00:22:27 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.684 1+0 records in 00:06:41.684 1+0 records out 
00:06:41.684 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000323381 s, 12.7 MB/s 00:06:41.684 00:22:27 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.684 00:22:27 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:41.684 00:22:27 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.684 00:22:27 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:41.684 00:22:27 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:41.684 00:22:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.684 00:22:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.684 00:22:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:41.943 /dev/nbd1 00:06:41.943 00:22:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:41.943 00:22:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:41.943 00:22:27 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:41.943 00:22:27 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:41.943 00:22:27 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:41.943 00:22:27 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:41.943 00:22:27 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:41.943 00:22:27 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:41.943 00:22:27 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:41.943 00:22:27 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:41.943 00:22:27 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.943 1+0 records in 00:06:41.943 1+0 records out 00:06:41.943 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207086 s, 19.8 MB/s 00:06:41.943 00:22:27 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.943 00:22:27 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:41.943 00:22:27 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.943 00:22:27 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:41.943 00:22:27 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:41.943 00:22:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.943 00:22:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.943 00:22:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.943 00:22:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.943 00:22:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:42.203 00:22:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:42.203 { 00:06:42.203 "nbd_device": "/dev/nbd0", 00:06:42.203 "bdev_name": "Malloc0" 00:06:42.203 }, 00:06:42.203 { 00:06:42.203 "nbd_device": "/dev/nbd1", 00:06:42.203 "bdev_name": "Malloc1" 00:06:42.203 } 
00:06:42.203 ]' 00:06:42.203 00:22:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:42.203 { 00:06:42.203 "nbd_device": "/dev/nbd0", 00:06:42.203 "bdev_name": "Malloc0" 00:06:42.203 }, 00:06:42.203 { 00:06:42.203 "nbd_device": "/dev/nbd1", 00:06:42.203 "bdev_name": "Malloc1" 00:06:42.203 } 00:06:42.203 ]' 00:06:42.203 00:22:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:42.203 00:22:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:42.203 /dev/nbd1' 00:06:42.203 00:22:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:42.203 00:22:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:42.203 /dev/nbd1' 00:06:42.203 00:22:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:42.203 00:22:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:42.203 00:22:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:42.203 00:22:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:42.203 00:22:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:42.203 00:22:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.203 00:22:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:42.203 00:22:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:42.203 00:22:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:42.203 00:22:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:42.203 00:22:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:42.462 256+0 records in 00:06:42.462 256+0 records out 00:06:42.462 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00890476 s, 118 MB/s 00:06:42.462 00:22:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.462 00:22:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:42.462 256+0 records in 00:06:42.462 256+0 records out 00:06:42.462 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0229652 s, 45.7 MB/s 00:06:42.462 00:22:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.462 00:22:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:42.462 256+0 records in 00:06:42.462 256+0 records out 00:06:42.462 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0230591 s, 45.5 MB/s 00:06:42.462 00:22:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:42.462 00:22:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.462 00:22:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:42.462 00:22:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:42.462 00:22:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:42.462 00:22:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:42.462 00:22:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:42.462 00:22:28 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.462 00:22:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:42.462 00:22:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.462 00:22:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:42.462 00:22:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:42.462 00:22:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:42.462 00:22:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.462 00:22:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.462 00:22:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:42.462 00:22:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:42.462 00:22:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.462 00:22:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:42.721 00:22:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:42.721 00:22:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:42.721 00:22:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:42.721 00:22:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.721 00:22:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.721 00:22:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:42.721 00:22:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:42.721 00:22:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.721 00:22:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.721 00:22:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:42.980 00:22:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:42.980 00:22:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:42.980 00:22:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:42.980 00:22:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.980 00:22:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.980 00:22:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:42.980 00:22:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:42.980 00:22:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.980 00:22:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:42.980 00:22:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.980 00:22:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:43.239 00:22:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:43.239 00:22:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:43.239 00:22:29 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:43.239 00:22:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:43.239 00:22:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:43.239 00:22:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:43.239 00:22:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:43.239 00:22:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:43.239 00:22:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:43.239 00:22:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:43.239 00:22:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:43.239 00:22:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:43.239 00:22:29 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:43.498 00:22:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:43.757 [2024-12-17 00:22:29.521558] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:43.757 [2024-12-17 00:22:29.552620] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.757 [2024-12-17 00:22:29.552632] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.758 [2024-12-17 00:22:29.584208] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:43.758 [2024-12-17 00:22:29.584383] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:43.758 [2024-12-17 00:22:29.584409] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:47.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:47.045 00:22:32 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70391 /var/tmp/spdk-nbd.sock 00:06:47.045 00:22:32 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70391 ']' 00:06:47.045 00:22:32 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:47.045 00:22:32 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.045 00:22:32 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
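The nbd_dd_data_verify steps traced above boil down to: fill a temp file from /dev/urandom, dd it onto each exported NBD device with oflag=direct, byte-compare the device contents back against the file with cmp, then stop the disks over RPC and confirm nbd_get_disks reports none left. A minimal standalone sketch of that flow follows; SPDK_DIR and the temp-file path are illustrative assumptions, while the RPC names, socket path and dd/cmp options are the ones visible in this trace.

# Sketch of the write/verify/teardown flow traced above (SPDK_DIR and tmp path are assumed).
rpc=/var/tmp/spdk-nbd.sock
tmp=/tmp/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1)

dd if=/dev/urandom of="$tmp" bs=4096 count=256                 # reference data
for nbd in "${nbd_list[@]}"; do
  dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct        # write it to each NBD device
done
for nbd in "${nbd_list[@]}"; do
  cmp -b -n 1M "$tmp" "$nbd"                                   # byte-compare device vs. file
done
rm "$tmp"

"$SPDK_DIR"/scripts/rpc.py -s "$rpc" nbd_stop_disk /dev/nbd0
"$SPDK_DIR"/scripts/rpc.py -s "$rpc" nbd_stop_disk /dev/nbd1
count=$("$SPDK_DIR"/scripts/rpc.py -s "$rpc" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
[ "$count" -eq 0 ]                                             # no NBD disks should remain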
00:06:47.045 00:22:32 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.045 00:22:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:47.045 00:22:32 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.045 00:22:32 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:47.045 00:22:32 event.app_repeat -- event/event.sh@39 -- # killprocess 70391 00:06:47.045 00:22:32 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 70391 ']' 00:06:47.045 00:22:32 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 70391 00:06:47.045 00:22:32 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:47.045 00:22:32 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:47.045 00:22:32 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70391 00:06:47.045 killing process with pid 70391 00:06:47.045 00:22:32 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:47.045 00:22:32 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:47.045 00:22:32 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70391' 00:06:47.045 00:22:32 event.app_repeat -- common/autotest_common.sh@969 -- # kill 70391 00:06:47.045 00:22:32 event.app_repeat -- common/autotest_common.sh@974 -- # wait 70391 00:06:47.045 spdk_app_start is called in Round 0. 00:06:47.045 Shutdown signal received, stop current app iteration 00:06:47.045 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:06:47.045 spdk_app_start is called in Round 1. 00:06:47.045 Shutdown signal received, stop current app iteration 00:06:47.045 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:06:47.045 spdk_app_start is called in Round 2. 00:06:47.045 Shutdown signal received, stop current app iteration 00:06:47.045 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:06:47.045 spdk_app_start is called in Round 3. 00:06:47.045 Shutdown signal received, stop current app iteration 00:06:47.045 ************************************ 00:06:47.045 END TEST app_repeat 00:06:47.045 ************************************ 00:06:47.045 00:22:32 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:47.045 00:22:32 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:47.045 00:06:47.045 real 0m18.585s 00:06:47.045 user 0m42.658s 00:06:47.045 sys 0m2.572s 00:06:47.045 00:22:32 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.045 00:22:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:47.045 00:22:32 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:47.045 00:22:32 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:47.045 00:22:32 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:47.045 00:22:32 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.045 00:22:32 event -- common/autotest_common.sh@10 -- # set +x 00:06:47.046 ************************************ 00:06:47.046 START TEST cpu_locks 00:06:47.046 ************************************ 00:06:47.046 00:22:32 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:47.046 * Looking for test storage... 
00:06:47.046 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:47.046 00:22:32 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:47.046 00:22:33 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:06:47.046 00:22:33 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:47.305 00:22:33 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:47.305 00:22:33 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.305 00:22:33 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.305 00:22:33 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.305 00:22:33 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.305 00:22:33 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.305 00:22:33 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.305 00:22:33 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.305 00:22:33 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.305 00:22:33 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.305 00:22:33 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.305 00:22:33 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.305 00:22:33 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:47.305 00:22:33 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:47.305 00:22:33 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.305 00:22:33 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:47.305 00:22:33 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:47.305 00:22:33 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:47.305 00:22:33 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.305 00:22:33 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:47.305 00:22:33 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.305 00:22:33 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:47.305 00:22:33 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:47.305 00:22:33 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.305 00:22:33 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:47.305 00:22:33 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.305 00:22:33 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.305 00:22:33 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.305 00:22:33 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:47.305 00:22:33 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.305 00:22:33 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:47.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.305 --rc genhtml_branch_coverage=1 00:06:47.305 --rc genhtml_function_coverage=1 00:06:47.305 --rc genhtml_legend=1 00:06:47.305 --rc geninfo_all_blocks=1 00:06:47.305 --rc geninfo_unexecuted_blocks=1 00:06:47.305 00:06:47.305 ' 00:06:47.305 00:22:33 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:47.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.305 --rc genhtml_branch_coverage=1 00:06:47.305 --rc genhtml_function_coverage=1 
00:06:47.305 --rc genhtml_legend=1 00:06:47.305 --rc geninfo_all_blocks=1 00:06:47.305 --rc geninfo_unexecuted_blocks=1 00:06:47.305 00:06:47.305 ' 00:06:47.305 00:22:33 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:47.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.305 --rc genhtml_branch_coverage=1 00:06:47.305 --rc genhtml_function_coverage=1 00:06:47.305 --rc genhtml_legend=1 00:06:47.305 --rc geninfo_all_blocks=1 00:06:47.305 --rc geninfo_unexecuted_blocks=1 00:06:47.305 00:06:47.305 ' 00:06:47.305 00:22:33 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:47.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.305 --rc genhtml_branch_coverage=1 00:06:47.305 --rc genhtml_function_coverage=1 00:06:47.305 --rc genhtml_legend=1 00:06:47.305 --rc geninfo_all_blocks=1 00:06:47.305 --rc geninfo_unexecuted_blocks=1 00:06:47.305 00:06:47.305 ' 00:06:47.305 00:22:33 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:47.305 00:22:33 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:47.305 00:22:33 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:47.305 00:22:33 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:47.305 00:22:33 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:47.305 00:22:33 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.305 00:22:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.305 ************************************ 00:06:47.305 START TEST default_locks 00:06:47.305 ************************************ 00:06:47.305 00:22:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:47.305 00:22:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70826 00:06:47.305 00:22:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:47.305 00:22:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70826 00:06:47.305 00:22:33 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70826 ']' 00:06:47.305 00:22:33 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.305 00:22:33 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.305 00:22:33 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.305 00:22:33 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.305 00:22:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.305 [2024-12-17 00:22:33.174597] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
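Every test in this log starts a fresh spdk_tgt and then sits in waitforlisten (the max_retries=100 loop traced here) until the target's RPC socket answers. A hedged sketch of that start-and-wait pattern is below; SPDK_DIR is an assumed path and the rpc_get_methods probe is one simple way to test the socket, not necessarily the exact call the helper uses.

# Start spdk_tgt and poll its RPC socket until it responds (waitforlisten-style loop).
rpc_addr=/var/tmp/spdk.sock
max_retries=100

"$SPDK_DIR"/build/bin/spdk_tgt -m 0x1 &
tgt_pid=$!

for ((i = 1; i <= max_retries; i++)); do
  if "$SPDK_DIR"/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
    break                      # socket is up and answering RPCs
  fi
  sleep 0.5
done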
00:06:47.305 [2024-12-17 00:22:33.174858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70826 ] 00:06:47.305 [2024-12-17 00:22:33.304208] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.564 [2024-12-17 00:22:33.336907] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.564 [2024-12-17 00:22:33.369628] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.564 00:22:33 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.564 00:22:33 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:47.564 00:22:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70826 00:06:47.564 00:22:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70826 00:06:47.564 00:22:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:47.822 00:22:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70826 00:06:47.823 00:22:33 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 70826 ']' 00:06:47.823 00:22:33 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 70826 00:06:47.823 00:22:33 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:47.823 00:22:33 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:47.823 00:22:33 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70826 00:06:48.081 killing process with pid 70826 00:06:48.081 00:22:33 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:48.081 00:22:33 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:48.081 00:22:33 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70826' 00:06:48.081 00:22:33 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 70826 00:06:48.081 00:22:33 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 70826 00:06:48.081 00:22:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70826 00:06:48.081 00:22:34 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:48.081 00:22:34 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70826 00:06:48.081 00:22:34 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:48.081 00:22:34 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.081 00:22:34 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:48.082 00:22:34 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.082 00:22:34 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 70826 00:06:48.082 00:22:34 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70826 ']' 00:06:48.082 00:22:34 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.082 
00:22:34 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.082 00:22:34 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.082 00:22:34 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.082 00:22:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.082 ERROR: process (pid: 70826) is no longer running 00:06:48.082 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (70826) - No such process 00:06:48.082 00:22:34 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.082 00:22:34 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:48.082 00:22:34 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:48.082 00:22:34 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:48.082 00:22:34 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:48.082 00:22:34 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:48.082 00:22:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:48.082 00:22:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:48.082 00:22:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:48.082 00:22:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:48.082 00:06:48.082 real 0m0.958s 00:06:48.082 user 0m1.046s 00:06:48.082 sys 0m0.364s 00:06:48.082 00:22:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.082 ************************************ 00:06:48.082 END TEST default_locks 00:06:48.082 ************************************ 00:06:48.082 00:22:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.341 00:22:34 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:48.341 00:22:34 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:48.341 00:22:34 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.341 00:22:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.341 ************************************ 00:06:48.341 START TEST default_locks_via_rpc 00:06:48.341 ************************************ 00:06:48.341 00:22:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:48.341 00:22:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=70865 00:06:48.341 00:22:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 70865 00:06:48.341 00:22:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:48.341 00:22:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 70865 ']' 00:06:48.341 00:22:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
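The default_locks run that ends above relies on two shell-level checks: lslocks piped through grep to prove the live target holds an spdk_cpu_lock file, and a kill/wait pair to prove the pid is gone afterwards. A rough equivalent, assuming tgt_pid holds the target's pid:

# Verify the running target holds an SPDK CPU-core lock file, then tear it down.
if lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock; then
  echo "pid $tgt_pid holds a core lock"
fi

kill "$tgt_pid"
wait "$tgt_pid" || echo "process $tgt_pid is no longer running"   # non-zero status once killed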
00:06:48.341 00:22:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.341 00:22:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.341 00:22:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.341 00:22:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.341 [2024-12-17 00:22:34.178440] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:48.341 [2024-12-17 00:22:34.178527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70865 ] 00:06:48.341 [2024-12-17 00:22:34.309688] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.600 [2024-12-17 00:22:34.346690] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.600 [2024-12-17 00:22:34.382466] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:48.600 00:22:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.600 00:22:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:48.600 00:22:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:48.600 00:22:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.600 00:22:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.600 00:22:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.600 00:22:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:48.600 00:22:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:48.600 00:22:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:48.600 00:22:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:48.600 00:22:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:48.600 00:22:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.600 00:22:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.600 00:22:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.600 00:22:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 70865 00:06:48.600 00:22:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 70865 00:06:48.600 00:22:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:49.168 00:22:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 70865 00:06:49.168 00:22:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 70865 ']' 00:06:49.168 00:22:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 70865 00:06:49.168 00:22:34 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:49.168 00:22:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:49.168 00:22:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70865 00:06:49.168 killing process with pid 70865 00:06:49.168 00:22:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:49.168 00:22:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:49.168 00:22:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70865' 00:06:49.168 00:22:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 70865 00:06:49.168 00:22:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 70865 00:06:49.427 00:06:49.427 real 0m1.113s 00:06:49.427 user 0m1.215s 00:06:49.427 sys 0m0.443s 00:06:49.427 00:22:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.427 ************************************ 00:06:49.427 END TEST default_locks_via_rpc 00:06:49.427 00:22:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.427 ************************************ 00:06:49.427 00:22:35 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:49.427 00:22:35 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:49.427 00:22:35 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.427 00:22:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.427 ************************************ 00:06:49.427 START TEST non_locking_app_on_locked_coremask 00:06:49.427 ************************************ 00:06:49.427 00:22:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:49.427 00:22:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=70910 00:06:49.427 00:22:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:49.427 00:22:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 70910 /var/tmp/spdk.sock 00:06:49.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.427 00:22:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70910 ']' 00:06:49.427 00:22:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.427 00:22:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.427 00:22:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
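default_locks_via_rpc, which finishes just above, toggles the core lock files at runtime rather than at startup; both RPC method names appear verbatim in the trace. A minimal sketch of the same two calls against the socket used in this run (SPDK_DIR assumed):

# Drop the CPU-core lock files of a running target, then re-acquire them.
"$SPDK_DIR"/scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
"$SPDK_DIR"/scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks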
00:06:49.427 00:22:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.427 00:22:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.427 [2024-12-17 00:22:35.353756] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:49.427 [2024-12-17 00:22:35.353866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70910 ] 00:06:49.686 [2024-12-17 00:22:35.490971] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.686 [2024-12-17 00:22:35.522725] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.686 [2024-12-17 00:22:35.556000] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:49.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:49.686 00:22:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:49.686 00:22:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:49.686 00:22:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=70918 00:06:49.686 00:22:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:49.686 00:22:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 70918 /var/tmp/spdk2.sock 00:06:49.686 00:22:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70918 ']' 00:06:49.686 00:22:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:49.686 00:22:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.686 00:22:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:49.686 00:22:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.686 00:22:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.945 [2024-12-17 00:22:35.748089] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:49.945 [2024-12-17 00:22:35.748422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70918 ] 00:06:49.946 [2024-12-17 00:22:35.887689] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:49.946 [2024-12-17 00:22:35.887739] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.205 [2024-12-17 00:22:35.957396] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.205 [2024-12-17 00:22:36.021989] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:50.773 00:22:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.773 00:22:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:50.773 00:22:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 70910 00:06:50.773 00:22:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70910 00:06:50.773 00:22:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:51.710 00:22:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 70910 00:06:51.710 00:22:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70910 ']' 00:06:51.710 00:22:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 70910 00:06:51.710 00:22:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:51.710 00:22:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:51.710 00:22:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70910 00:06:51.710 killing process with pid 70910 00:06:51.710 00:22:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:51.710 00:22:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:51.710 00:22:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70910' 00:06:51.710 00:22:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 70910 00:06:51.710 00:22:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 70910 00:06:52.278 00:22:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 70918 00:06:52.278 00:22:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70918 ']' 00:06:52.278 00:22:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 70918 00:06:52.278 00:22:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:52.278 00:22:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:52.278 00:22:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70918 00:06:52.278 killing process with pid 70918 00:06:52.278 00:22:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:52.278 00:22:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:52.278 00:22:38 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70918' 00:06:52.278 00:22:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 70918 00:06:52.278 00:22:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 70918 00:06:52.537 ************************************ 00:06:52.538 END TEST non_locking_app_on_locked_coremask 00:06:52.538 ************************************ 00:06:52.538 00:06:52.538 real 0m3.068s 00:06:52.538 user 0m3.663s 00:06:52.538 sys 0m0.900s 00:06:52.538 00:22:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:52.538 00:22:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.538 00:22:38 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:52.538 00:22:38 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:52.538 00:22:38 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:52.538 00:22:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.538 ************************************ 00:06:52.538 START TEST locking_app_on_unlocked_coremask 00:06:52.538 ************************************ 00:06:52.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.538 00:22:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:52.538 00:22:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=70980 00:06:52.538 00:22:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:52.538 00:22:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 70980 /var/tmp/spdk.sock 00:06:52.538 00:22:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70980 ']' 00:06:52.538 00:22:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.538 00:22:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:52.538 00:22:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.538 00:22:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:52.538 00:22:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.538 [2024-12-17 00:22:38.455151] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:52.538 [2024-12-17 00:22:38.455238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70980 ] 00:06:52.797 [2024-12-17 00:22:38.583054] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
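non_locking_app_on_locked_coremask, wrapped up above, shows that a second target can run on an already-locked core as long as it is started with --disable-cpumask-locks and its own RPC socket; those are exactly the flags in the traced command lines. A hedged sketch of launching such a pair (SPDK_DIR assumed):

# First instance claims the core 0 lock as usual.
"$SPDK_DIR"/build/bin/spdk_tgt -m 0x1 &

# Second instance shares core 0 but skips the lock files entirely.
"$SPDK_DIR"/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &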
00:06:52.797 [2024-12-17 00:22:38.583091] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.797 [2024-12-17 00:22:38.615023] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.797 [2024-12-17 00:22:38.648039] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:52.797 00:22:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:52.797 00:22:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:52.797 00:22:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:52.797 00:22:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=70988 00:06:52.797 00:22:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 70988 /var/tmp/spdk2.sock 00:06:52.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:52.797 00:22:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70988 ']' 00:06:52.797 00:22:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.797 00:22:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:52.797 00:22:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:52.797 00:22:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:52.797 00:22:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.056 [2024-12-17 00:22:38.817226] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:53.056 [2024-12-17 00:22:38.817568] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70988 ] 00:06:53.056 [2024-12-17 00:22:38.955251] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.056 [2024-12-17 00:22:39.021284] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.315 [2024-12-17 00:22:39.087551] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:53.315 00:22:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.315 00:22:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:53.315 00:22:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 70988 00:06:53.315 00:22:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70988 00:06:53.315 00:22:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:54.252 00:22:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 70980 00:06:54.252 00:22:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70980 ']' 00:06:54.252 00:22:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 70980 00:06:54.252 00:22:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:54.252 00:22:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:54.252 00:22:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70980 00:06:54.252 00:22:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:54.252 00:22:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:54.252 killing process with pid 70980 00:06:54.252 00:22:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70980' 00:06:54.252 00:22:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 70980 00:06:54.252 00:22:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 70980 00:06:54.512 00:22:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 70988 00:06:54.512 00:22:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70988 ']' 00:06:54.512 00:22:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 70988 00:06:54.512 00:22:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:54.512 00:22:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:54.512 00:22:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70988 00:06:54.512 killing process with pid 70988 00:06:54.512 00:22:40 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:54.512 00:22:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:54.512 00:22:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70988' 00:06:54.512 00:22:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 70988 00:06:54.512 00:22:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 70988 00:06:54.771 ************************************ 00:06:54.771 END TEST locking_app_on_unlocked_coremask 00:06:54.771 ************************************ 00:06:54.771 00:06:54.771 real 0m2.268s 00:06:54.771 user 0m2.598s 00:06:54.771 sys 0m0.736s 00:06:54.771 00:22:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.771 00:22:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.771 00:22:40 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:54.771 00:22:40 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:54.771 00:22:40 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.771 00:22:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.771 ************************************ 00:06:54.771 START TEST locking_app_on_locked_coremask 00:06:54.771 ************************************ 00:06:54.771 00:22:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:54.771 00:22:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71042 00:06:54.771 00:22:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:54.771 00:22:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71042 /var/tmp/spdk.sock 00:06:54.771 00:22:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71042 ']' 00:06:54.771 00:22:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.771 00:22:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.771 00:22:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.771 00:22:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.771 00:22:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.031 [2024-12-17 00:22:40.783089] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:55.031 [2024-12-17 00:22:40.783381] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71042 ] 00:06:55.031 [2024-12-17 00:22:40.916772] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.031 [2024-12-17 00:22:40.948382] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.031 [2024-12-17 00:22:40.981377] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:55.290 00:22:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.290 00:22:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:55.290 00:22:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71045 00:06:55.290 00:22:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71045 /var/tmp/spdk2.sock 00:06:55.290 00:22:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:55.290 00:22:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:55.291 00:22:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71045 /var/tmp/spdk2.sock 00:06:55.291 00:22:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:55.291 00:22:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.291 00:22:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:55.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:55.291 00:22:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.291 00:22:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71045 /var/tmp/spdk2.sock 00:06:55.291 00:22:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71045 ']' 00:06:55.291 00:22:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:55.291 00:22:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.291 00:22:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:55.291 00:22:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.291 00:22:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.291 [2024-12-17 00:22:41.145400] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:55.291 [2024-12-17 00:22:41.145664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71045 ] 00:06:55.291 [2024-12-17 00:22:41.283462] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71042 has claimed it. 00:06:55.291 [2024-12-17 00:22:41.283538] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:56.227 ERROR: process (pid: 71045) is no longer running 00:06:56.227 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71045) - No such process 00:06:56.227 00:22:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.227 00:22:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:56.227 00:22:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:56.227 00:22:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:56.227 00:22:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:56.227 00:22:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:56.227 00:22:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71042 00:06:56.227 00:22:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71042 00:06:56.227 00:22:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:56.486 00:22:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71042 00:06:56.486 00:22:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71042 ']' 00:06:56.486 00:22:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71042 00:06:56.486 00:22:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:56.486 00:22:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:56.486 00:22:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71042 00:06:56.486 killing process with pid 71042 00:06:56.486 00:22:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:56.486 00:22:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:56.486 00:22:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71042' 00:06:56.486 00:22:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71042 00:06:56.486 00:22:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71042 00:06:56.746 00:06:56.746 real 0m1.863s 00:06:56.746 user 0m2.245s 00:06:56.746 sys 0m0.497s 00:06:56.746 00:22:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.746 ************************************ 00:06:56.746 END 
TEST locking_app_on_locked_coremask 00:06:56.746 ************************************ 00:06:56.746 00:22:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.746 00:22:42 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:56.746 00:22:42 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:56.746 00:22:42 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.746 00:22:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.746 ************************************ 00:06:56.746 START TEST locking_overlapped_coremask 00:06:56.746 ************************************ 00:06:56.746 00:22:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:56.746 00:22:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71095 00:06:56.746 00:22:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71095 /var/tmp/spdk.sock 00:06:56.746 00:22:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71095 ']' 00:06:56.746 00:22:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:56.746 00:22:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.746 00:22:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:56.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.746 00:22:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.746 00:22:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:56.746 00:22:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.746 [2024-12-17 00:22:42.696078] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
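locking_app_on_locked_coremask, which ends above, exercises the failure path: a second target started without --disable-cpumask-locks on a claimed core logs 'Cannot create lock on core 0, probably process ... has claimed it' and exits, so waitforlisten reports it as gone. A sketch of that negative check, assuming core 0 is already held by another spdk_tgt and SPDK_DIR is the repo path:

# Expected to fail: core 0 is already locked by another target.
"$SPDK_DIR"/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
second_pid=$!
if ! wait "$second_pid"; then
  echo "second instance exited as expected (core 0 already claimed)"
fi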
00:06:56.746 [2024-12-17 00:22:42.696164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71095 ] 00:06:57.005 [2024-12-17 00:22:42.824828] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:57.005 [2024-12-17 00:22:42.861046] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.005 [2024-12-17 00:22:42.861142] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.005 [2024-12-17 00:22:42.861144] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.005 [2024-12-17 00:22:42.896389] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:57.264 00:22:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:57.264 00:22:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:57.264 00:22:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71101 00:06:57.264 00:22:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71101 /var/tmp/spdk2.sock 00:06:57.264 00:22:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:57.264 00:22:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:57.264 00:22:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71101 /var/tmp/spdk2.sock 00:06:57.264 00:22:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:57.264 00:22:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.264 00:22:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:57.264 00:22:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.264 00:22:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71101 /var/tmp/spdk2.sock 00:06:57.264 00:22:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71101 ']' 00:06:57.264 00:22:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:57.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:57.264 00:22:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:57.264 00:22:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:57.264 00:22:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:57.264 00:22:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.264 [2024-12-17 00:22:43.087549] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:57.264 [2024-12-17 00:22:43.087657] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71101 ] 00:06:57.264 [2024-12-17 00:22:43.231993] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71095 has claimed it. 00:06:57.264 [2024-12-17 00:22:43.232063] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:57.832 ERROR: process (pid: 71101) is no longer running 00:06:57.832 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71101) - No such process 00:06:57.832 00:22:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:57.832 00:22:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:57.832 00:22:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:57.832 00:22:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:57.832 00:22:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:57.832 00:22:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:57.832 00:22:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:57.832 00:22:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:57.832 00:22:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:57.832 00:22:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:57.832 00:22:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71095 00:06:57.832 00:22:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 71095 ']' 00:06:57.832 00:22:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 71095 00:06:57.832 00:22:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:57.832 00:22:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:57.832 00:22:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71095 00:06:57.832 00:22:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:57.832 killing process with pid 71095 00:06:57.832 00:22:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:57.832 00:22:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71095' 00:06:58.091 00:22:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 71095 00:06:58.091 00:22:43 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 71095 00:06:58.091 00:06:58.091 real 0m1.424s 00:06:58.091 user 0m3.992s 00:06:58.091 sys 0m0.295s 00:06:58.091 00:22:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.091 00:22:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.091 ************************************ 00:06:58.091 END TEST locking_overlapped_coremask 00:06:58.091 ************************************ 00:06:58.351 00:22:44 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:58.351 00:22:44 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.351 00:22:44 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.351 00:22:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.351 ************************************ 00:06:58.351 START TEST locking_overlapped_coremask_via_rpc 00:06:58.351 ************************************ 00:06:58.351 00:22:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:58.351 00:22:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71141 00:06:58.351 00:22:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71141 /var/tmp/spdk.sock 00:06:58.351 00:22:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:58.351 00:22:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71141 ']' 00:06:58.351 00:22:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.351 00:22:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.351 00:22:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.351 00:22:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.351 00:22:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.351 [2024-12-17 00:22:44.178684] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:58.351 [2024-12-17 00:22:44.178792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71141 ] 00:06:58.351 [2024-12-17 00:22:44.309965] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:58.351 [2024-12-17 00:22:44.310013] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:58.351 [2024-12-17 00:22:44.343252] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.351 [2024-12-17 00:22:44.343387] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.351 [2024-12-17 00:22:44.343391] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.624 [2024-12-17 00:22:44.378979] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:58.624 00:22:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.624 00:22:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:58.624 00:22:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71152 00:06:58.624 00:22:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:58.624 00:22:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71152 /var/tmp/spdk2.sock 00:06:58.624 00:22:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71152 ']' 00:06:58.624 00:22:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:58.624 00:22:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:58.624 00:22:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:58.624 00:22:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.624 00:22:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.624 [2024-12-17 00:22:44.566742] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:58.624 [2024-12-17 00:22:44.566850] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71152 ] 00:06:58.895 [2024-12-17 00:22:44.711224] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
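Note: both targets in this via_rpc case start with --disable-cpumask-locks, so neither creates /var/tmp/spdk_cpu_lock_* files at startup (hence the "CPU core locks deactivated" notices) and the overlapping masks can coexist; the lock files only appear once locking is re-enabled over RPC. A rough, illustrative way to observe this from a shell, not part of the test scripts themselves:

    ls /var/tmp/spdk_cpu_lock_*   # nothing yet: core locks were disabled at startup
    # ...after framework_enable_cpumask_locks is sent to the 0x7 target (see below):
    ls /var/tmp/spdk_cpu_lock_*   # spdk_cpu_lock_000 _001 _002, exactly what check_remaining_locks expects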
00:06:58.895 [2024-12-17 00:22:44.711281] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:58.895 [2024-12-17 00:22:44.786821] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:58.895 [2024-12-17 00:22:44.786914] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:58.895 [2024-12-17 00:22:44.786916] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.895 [2024-12-17 00:22:44.862783] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:59.497 00:22:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:59.497 00:22:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:59.756 00:22:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:59.756 00:22:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.756 00:22:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.756 00:22:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.756 00:22:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:59.756 00:22:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:59.756 00:22:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:59.756 00:22:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:59.756 00:22:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:59.756 00:22:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:59.756 00:22:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:59.756 00:22:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:59.756 00:22:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.756 00:22:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.756 [2024-12-17 00:22:45.513470] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71141 has claimed it. 
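The contested core in both cpu_locks failures above is not arbitrary: -m 0x7 covers cores 0-2 and -m 0x1c covers cores 2-4, so core 2 is the only core both masks claim. Plain shell arithmetic makes the overlap explicit (nothing SPDK-specific here):

    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. only bit 2 set, so only core 2 is contested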
00:06:59.756 request: 00:06:59.756 { 00:06:59.756 "method": "framework_enable_cpumask_locks", 00:06:59.756 "req_id": 1 00:06:59.756 } 00:06:59.756 Got JSON-RPC error response 00:06:59.756 response: 00:06:59.756 { 00:06:59.756 "code": -32603, 00:06:59.756 "message": "Failed to claim CPU core: 2" 00:06:59.756 } 00:06:59.756 00:22:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:59.756 00:22:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:59.756 00:22:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:59.756 00:22:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:59.756 00:22:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:59.756 00:22:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71141 /var/tmp/spdk.sock 00:06:59.756 00:22:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71141 ']' 00:06:59.756 00:22:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.756 00:22:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:59.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.756 00:22:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.756 00:22:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:59.756 00:22:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.014 00:22:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:00.014 00:22:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:00.014 00:22:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71152 /var/tmp/spdk2.sock 00:07:00.014 00:22:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71152 ']' 00:07:00.014 00:22:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:00.014 00:22:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:00.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:00.014 00:22:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
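Condensed, the sequence traced in this via_rpc test is: start two targets on overlapping masks with core locks disabled, re-enable locking on the first (which claims cores 0-2), then see the same RPC fail on the second because core 2 is already taken. A minimal sketch, assuming the first target listens on the default /var/tmp/spdk.sock and that scripts/rpc.py exposes the matching subcommand:

    build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &                          # target A, cores 0-2
    build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # target B, cores 2-4
    scripts/rpc.py framework_enable_cpumask_locks                          # A claims spdk_cpu_lock_000..002
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # B fails: "Failed to claim CPU core: 2" (-32603)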
00:07:00.014 00:22:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.014 00:22:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.273 ************************************ 00:07:00.273 END TEST locking_overlapped_coremask_via_rpc 00:07:00.273 ************************************ 00:07:00.273 00:22:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:00.273 00:22:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:00.273 00:22:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:00.273 00:22:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:00.273 00:22:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:00.273 00:22:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:00.273 00:07:00.273 real 0m1.957s 00:07:00.273 user 0m1.182s 00:07:00.273 sys 0m0.143s 00:07:00.273 00:22:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.273 00:22:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.273 00:22:46 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:00.273 00:22:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71141 ]] 00:07:00.273 00:22:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71141 00:07:00.273 00:22:46 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71141 ']' 00:07:00.273 00:22:46 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71141 00:07:00.273 00:22:46 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:00.273 00:22:46 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:00.273 00:22:46 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71141 00:07:00.273 killing process with pid 71141 00:07:00.273 00:22:46 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:00.273 00:22:46 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:00.273 00:22:46 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71141' 00:07:00.273 00:22:46 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71141 00:07:00.273 00:22:46 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71141 00:07:00.532 00:22:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71152 ]] 00:07:00.532 00:22:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71152 00:07:00.532 00:22:46 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71152 ']' 00:07:00.532 00:22:46 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71152 00:07:00.532 00:22:46 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:00.532 00:22:46 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:00.532 
00:22:46 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71152 00:07:00.532 killing process with pid 71152 00:07:00.532 00:22:46 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:00.532 00:22:46 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:00.532 00:22:46 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71152' 00:07:00.532 00:22:46 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71152 00:07:00.532 00:22:46 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71152 00:07:00.792 00:22:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:00.792 Process with pid 71141 is not found 00:07:00.792 Process with pid 71152 is not found 00:07:00.792 00:22:46 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:00.792 00:22:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71141 ]] 00:07:00.792 00:22:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71141 00:07:00.792 00:22:46 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71141 ']' 00:07:00.792 00:22:46 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71141 00:07:00.792 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71141) - No such process 00:07:00.792 00:22:46 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71141 is not found' 00:07:00.792 00:22:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71152 ]] 00:07:00.792 00:22:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71152 00:07:00.792 00:22:46 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71152 ']' 00:07:00.792 00:22:46 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71152 00:07:00.792 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71152) - No such process 00:07:00.792 00:22:46 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71152 is not found' 00:07:00.792 00:22:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:00.792 ************************************ 00:07:00.792 END TEST cpu_locks 00:07:00.792 ************************************ 00:07:00.792 00:07:00.792 real 0m13.763s 00:07:00.792 user 0m25.946s 00:07:00.792 sys 0m4.064s 00:07:00.792 00:22:46 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.792 00:22:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:00.792 ************************************ 00:07:00.792 END TEST event 00:07:00.792 ************************************ 00:07:00.792 00:07:00.792 real 0m41.119s 00:07:00.792 user 1m22.918s 00:07:00.792 sys 0m7.343s 00:07:00.792 00:22:46 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.792 00:22:46 event -- common/autotest_common.sh@10 -- # set +x 00:07:00.792 00:22:46 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:00.792 00:22:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.792 00:22:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.792 00:22:46 -- common/autotest_common.sh@10 -- # set +x 00:07:00.792 ************************************ 00:07:00.792 START TEST thread 00:07:00.792 ************************************ 00:07:00.792 00:22:46 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:01.051 * Looking for test storage... 
00:07:01.051 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:01.051 00:22:46 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:01.051 00:22:46 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:07:01.051 00:22:46 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:01.051 00:22:46 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:01.051 00:22:46 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.051 00:22:46 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.051 00:22:46 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.051 00:22:46 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.051 00:22:46 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.051 00:22:46 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.051 00:22:46 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.051 00:22:46 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.051 00:22:46 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.051 00:22:46 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.051 00:22:46 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.051 00:22:46 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:01.051 00:22:46 thread -- scripts/common.sh@345 -- # : 1 00:07:01.051 00:22:46 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.051 00:22:46 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:01.051 00:22:46 thread -- scripts/common.sh@365 -- # decimal 1 00:07:01.051 00:22:46 thread -- scripts/common.sh@353 -- # local d=1 00:07:01.051 00:22:46 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.051 00:22:46 thread -- scripts/common.sh@355 -- # echo 1 00:07:01.051 00:22:46 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.051 00:22:46 thread -- scripts/common.sh@366 -- # decimal 2 00:07:01.051 00:22:46 thread -- scripts/common.sh@353 -- # local d=2 00:07:01.051 00:22:46 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.051 00:22:46 thread -- scripts/common.sh@355 -- # echo 2 00:07:01.051 00:22:46 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.051 00:22:46 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.051 00:22:46 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.051 00:22:46 thread -- scripts/common.sh@368 -- # return 0 00:07:01.051 00:22:46 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.051 00:22:46 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:01.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.051 --rc genhtml_branch_coverage=1 00:07:01.051 --rc genhtml_function_coverage=1 00:07:01.051 --rc genhtml_legend=1 00:07:01.051 --rc geninfo_all_blocks=1 00:07:01.051 --rc geninfo_unexecuted_blocks=1 00:07:01.051 00:07:01.051 ' 00:07:01.051 00:22:46 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:01.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.051 --rc genhtml_branch_coverage=1 00:07:01.051 --rc genhtml_function_coverage=1 00:07:01.051 --rc genhtml_legend=1 00:07:01.051 --rc geninfo_all_blocks=1 00:07:01.051 --rc geninfo_unexecuted_blocks=1 00:07:01.051 00:07:01.051 ' 00:07:01.051 00:22:46 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:01.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:01.051 --rc genhtml_branch_coverage=1 00:07:01.051 --rc genhtml_function_coverage=1 00:07:01.051 --rc genhtml_legend=1 00:07:01.051 --rc geninfo_all_blocks=1 00:07:01.051 --rc geninfo_unexecuted_blocks=1 00:07:01.051 00:07:01.051 ' 00:07:01.051 00:22:46 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:01.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.051 --rc genhtml_branch_coverage=1 00:07:01.051 --rc genhtml_function_coverage=1 00:07:01.051 --rc genhtml_legend=1 00:07:01.051 --rc geninfo_all_blocks=1 00:07:01.051 --rc geninfo_unexecuted_blocks=1 00:07:01.051 00:07:01.051 ' 00:07:01.051 00:22:46 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:01.051 00:22:46 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:01.051 00:22:46 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.051 00:22:46 thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.051 ************************************ 00:07:01.051 START TEST thread_poller_perf 00:07:01.051 ************************************ 00:07:01.051 00:22:46 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:01.051 [2024-12-17 00:22:46.975513] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:01.051 [2024-12-17 00:22:46.975622] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71284 ] 00:07:01.310 [2024-12-17 00:22:47.110681] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.310 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:01.310 [2024-12-17 00:22:47.141287] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.245 [2024-12-17T00:22:48.248Z] ====================================== 00:07:02.245 [2024-12-17T00:22:48.248Z] busy:2209183350 (cyc) 00:07:02.245 [2024-12-17T00:22:48.248Z] total_run_count: 372000 00:07:02.245 [2024-12-17T00:22:48.248Z] tsc_hz: 2200000000 (cyc) 00:07:02.245 [2024-12-17T00:22:48.248Z] ====================================== 00:07:02.245 [2024-12-17T00:22:48.248Z] poller_cost: 5938 (cyc), 2699 (nsec) 00:07:02.245 00:07:02.245 real 0m1.238s 00:07:02.245 user 0m1.089s 00:07:02.245 sys 0m0.043s 00:07:02.245 00:22:48 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.245 ************************************ 00:07:02.245 END TEST thread_poller_perf 00:07:02.245 ************************************ 00:07:02.245 00:22:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:02.245 00:22:48 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:02.245 00:22:48 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:02.245 00:22:48 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.245 00:22:48 thread -- common/autotest_common.sh@10 -- # set +x 00:07:02.504 ************************************ 00:07:02.504 START TEST thread_poller_perf 00:07:02.504 ************************************ 00:07:02.504 00:22:48 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:02.504 [2024-12-17 00:22:48.265016] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:02.504 [2024-12-17 00:22:48.265107] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71314 ] 00:07:02.504 [2024-12-17 00:22:48.400814] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.504 Running 1000 pollers for 1 seconds with 0 microseconds period. 
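The first poller_perf run above (-b 1000 -l 1 -t 1: 1000 pollers, 1 microsecond period, 1 second run) reports its per-poller cost as busy cycles divided by total_run_count, converted to nanoseconds via the 2200000000 Hz TSC (2.2 cycles per nsec). The reported 5938 cyc / 2699 nsec can be reproduced with shell integer arithmetic:

    echo "$(( 2209183350 / 372000 )) cyc"             # 5938
    echo "$(( 2209183350 / 372000 * 10 / 22 )) nsec"  # 2699, since 2.2 cyc/nsec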
00:07:02.504 [2024-12-17 00:22:48.430951] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.881 [2024-12-17T00:22:49.884Z] ====================================== 00:07:03.881 [2024-12-17T00:22:49.884Z] busy:2201941962 (cyc) 00:07:03.881 [2024-12-17T00:22:49.884Z] total_run_count: 5024000 00:07:03.881 [2024-12-17T00:22:49.884Z] tsc_hz: 2200000000 (cyc) 00:07:03.881 [2024-12-17T00:22:49.884Z] ====================================== 00:07:03.881 [2024-12-17T00:22:49.884Z] poller_cost: 438 (cyc), 199 (nsec) 00:07:03.881 00:07:03.881 real 0m1.234s 00:07:03.881 user 0m1.099s 00:07:03.881 sys 0m0.030s 00:07:03.881 00:22:49 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.881 ************************************ 00:07:03.881 END TEST thread_poller_perf 00:07:03.881 ************************************ 00:07:03.881 00:22:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:03.881 00:22:49 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:03.881 00:07:03.881 real 0m2.759s 00:07:03.881 user 0m2.342s 00:07:03.881 sys 0m0.203s 00:07:03.881 00:22:49 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.881 00:22:49 thread -- common/autotest_common.sh@10 -- # set +x 00:07:03.881 ************************************ 00:07:03.881 END TEST thread 00:07:03.881 ************************************ 00:07:03.881 00:22:49 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:03.881 00:22:49 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:03.881 00:22:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.881 00:22:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.881 00:22:49 -- common/autotest_common.sh@10 -- # set +x 00:07:03.881 ************************************ 00:07:03.881 START TEST app_cmdline 00:07:03.881 ************************************ 00:07:03.881 00:22:49 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:03.881 * Looking for test storage... 
00:07:03.881 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:03.881 00:22:49 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:03.881 00:22:49 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:07:03.881 00:22:49 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:03.881 00:22:49 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:03.881 00:22:49 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:03.881 00:22:49 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:03.881 00:22:49 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:03.881 00:22:49 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.881 00:22:49 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:03.881 00:22:49 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:03.881 00:22:49 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:03.881 00:22:49 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:03.881 00:22:49 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:03.881 00:22:49 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:03.881 00:22:49 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:03.882 00:22:49 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:03.882 00:22:49 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:03.882 00:22:49 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:03.882 00:22:49 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:03.882 00:22:49 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:03.882 00:22:49 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:03.882 00:22:49 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.882 00:22:49 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:03.882 00:22:49 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:03.882 00:22:49 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:03.882 00:22:49 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:03.882 00:22:49 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.882 00:22:49 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:03.882 00:22:49 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:03.882 00:22:49 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:03.882 00:22:49 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:03.882 00:22:49 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:03.882 00:22:49 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.882 00:22:49 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:03.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.882 --rc genhtml_branch_coverage=1 00:07:03.882 --rc genhtml_function_coverage=1 00:07:03.882 --rc genhtml_legend=1 00:07:03.882 --rc geninfo_all_blocks=1 00:07:03.882 --rc geninfo_unexecuted_blocks=1 00:07:03.882 00:07:03.882 ' 00:07:03.882 00:22:49 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:03.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.882 --rc genhtml_branch_coverage=1 00:07:03.882 --rc genhtml_function_coverage=1 00:07:03.882 --rc genhtml_legend=1 00:07:03.882 --rc geninfo_all_blocks=1 00:07:03.882 --rc geninfo_unexecuted_blocks=1 00:07:03.882 
00:07:03.882 ' 00:07:03.882 00:22:49 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:03.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.882 --rc genhtml_branch_coverage=1 00:07:03.882 --rc genhtml_function_coverage=1 00:07:03.882 --rc genhtml_legend=1 00:07:03.882 --rc geninfo_all_blocks=1 00:07:03.882 --rc geninfo_unexecuted_blocks=1 00:07:03.882 00:07:03.882 ' 00:07:03.882 00:22:49 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:03.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.882 --rc genhtml_branch_coverage=1 00:07:03.882 --rc genhtml_function_coverage=1 00:07:03.882 --rc genhtml_legend=1 00:07:03.882 --rc geninfo_all_blocks=1 00:07:03.882 --rc geninfo_unexecuted_blocks=1 00:07:03.882 00:07:03.882 ' 00:07:03.882 00:22:49 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:03.882 00:22:49 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71397 00:07:03.882 00:22:49 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:03.882 00:22:49 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71397 00:07:03.882 00:22:49 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 71397 ']' 00:07:03.882 00:22:49 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.882 00:22:49 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:03.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.882 00:22:49 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.882 00:22:49 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:03.882 00:22:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:03.882 [2024-12-17 00:22:49.817543] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
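The app_cmdline target here is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served and anything else is rejected with JSON-RPC error -32601, as the trace below shows. A minimal sketch of the same checks, with paths as in this run and the target assumed to be listening before the rpc.py calls:

    build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    scripts/rpc.py spdk_get_version        # allowed: returns the version JSON seen below
    scripts/rpc.py rpc_get_methods         # allowed: lists exactly the two permitted methods
    scripts/rpc.py env_dpdk_get_mem_stats  # not in the allow list: "Method not found" (-32601)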
00:07:03.882 [2024-12-17 00:22:49.817652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71397 ] 00:07:04.141 [2024-12-17 00:22:49.954249] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.141 [2024-12-17 00:22:49.986183] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.141 [2024-12-17 00:22:50.020663] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:04.141 00:22:50 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:04.141 00:22:50 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:04.141 00:22:50 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:04.400 { 00:07:04.400 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:07:04.400 "fields": { 00:07:04.400 "major": 24, 00:07:04.400 "minor": 9, 00:07:04.400 "patch": 1, 00:07:04.400 "suffix": "-pre", 00:07:04.400 "commit": "b18e1bd62" 00:07:04.400 } 00:07:04.400 } 00:07:04.400 00:22:50 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:04.400 00:22:50 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:04.400 00:22:50 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:04.400 00:22:50 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:04.400 00:22:50 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:04.400 00:22:50 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:04.400 00:22:50 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.400 00:22:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:04.400 00:22:50 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:04.400 00:22:50 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.659 00:22:50 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:04.659 00:22:50 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:04.659 00:22:50 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:04.659 00:22:50 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:04.659 00:22:50 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:04.659 00:22:50 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:04.659 00:22:50 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.659 00:22:50 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:04.659 00:22:50 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.659 00:22:50 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:04.659 00:22:50 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.659 00:22:50 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:04.659 00:22:50 app_cmdline -- common/autotest_common.sh@644 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:04.659 00:22:50 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:04.918 request: 00:07:04.918 { 00:07:04.918 "method": "env_dpdk_get_mem_stats", 00:07:04.918 "req_id": 1 00:07:04.918 } 00:07:04.918 Got JSON-RPC error response 00:07:04.918 response: 00:07:04.918 { 00:07:04.918 "code": -32601, 00:07:04.918 "message": "Method not found" 00:07:04.918 } 00:07:04.918 00:22:50 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:04.918 00:22:50 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:04.918 00:22:50 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:04.918 00:22:50 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:04.918 00:22:50 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71397 00:07:04.918 00:22:50 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 71397 ']' 00:07:04.918 00:22:50 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 71397 00:07:04.918 00:22:50 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:04.918 00:22:50 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:04.918 00:22:50 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71397 00:07:04.918 00:22:50 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:04.918 00:22:50 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:04.918 killing process with pid 71397 00:07:04.918 00:22:50 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71397' 00:07:04.918 00:22:50 app_cmdline -- common/autotest_common.sh@969 -- # kill 71397 00:07:04.918 00:22:50 app_cmdline -- common/autotest_common.sh@974 -- # wait 71397 00:07:05.177 00:07:05.177 real 0m1.381s 00:07:05.177 user 0m1.832s 00:07:05.177 sys 0m0.344s 00:07:05.177 00:22:50 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.177 00:22:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:05.177 ************************************ 00:07:05.177 END TEST app_cmdline 00:07:05.177 ************************************ 00:07:05.177 00:22:50 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:05.177 00:22:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:05.177 00:22:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.177 00:22:50 -- common/autotest_common.sh@10 -- # set +x 00:07:05.177 ************************************ 00:07:05.177 START TEST version 00:07:05.177 ************************************ 00:07:05.177 00:22:50 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:05.177 * Looking for test storage... 
00:07:05.177 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:05.177 00:22:51 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:05.177 00:22:51 version -- common/autotest_common.sh@1681 -- # lcov --version 00:07:05.177 00:22:51 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:05.177 00:22:51 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:05.177 00:22:51 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.177 00:22:51 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.177 00:22:51 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.177 00:22:51 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.177 00:22:51 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.177 00:22:51 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.177 00:22:51 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.177 00:22:51 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.177 00:22:51 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.177 00:22:51 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.177 00:22:51 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.177 00:22:51 version -- scripts/common.sh@344 -- # case "$op" in 00:07:05.177 00:22:51 version -- scripts/common.sh@345 -- # : 1 00:07:05.177 00:22:51 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.177 00:22:51 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:05.177 00:22:51 version -- scripts/common.sh@365 -- # decimal 1 00:07:05.177 00:22:51 version -- scripts/common.sh@353 -- # local d=1 00:07:05.177 00:22:51 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.177 00:22:51 version -- scripts/common.sh@355 -- # echo 1 00:07:05.177 00:22:51 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.436 00:22:51 version -- scripts/common.sh@366 -- # decimal 2 00:07:05.436 00:22:51 version -- scripts/common.sh@353 -- # local d=2 00:07:05.436 00:22:51 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.436 00:22:51 version -- scripts/common.sh@355 -- # echo 2 00:07:05.436 00:22:51 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.436 00:22:51 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.436 00:22:51 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.436 00:22:51 version -- scripts/common.sh@368 -- # return 0 00:07:05.436 00:22:51 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.436 00:22:51 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:05.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.436 --rc genhtml_branch_coverage=1 00:07:05.436 --rc genhtml_function_coverage=1 00:07:05.436 --rc genhtml_legend=1 00:07:05.436 --rc geninfo_all_blocks=1 00:07:05.436 --rc geninfo_unexecuted_blocks=1 00:07:05.436 00:07:05.436 ' 00:07:05.436 00:22:51 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:05.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.436 --rc genhtml_branch_coverage=1 00:07:05.436 --rc genhtml_function_coverage=1 00:07:05.436 --rc genhtml_legend=1 00:07:05.436 --rc geninfo_all_blocks=1 00:07:05.436 --rc geninfo_unexecuted_blocks=1 00:07:05.436 00:07:05.436 ' 00:07:05.436 00:22:51 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:05.436 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:05.436 --rc genhtml_branch_coverage=1 00:07:05.436 --rc genhtml_function_coverage=1 00:07:05.436 --rc genhtml_legend=1 00:07:05.436 --rc geninfo_all_blocks=1 00:07:05.436 --rc geninfo_unexecuted_blocks=1 00:07:05.436 00:07:05.436 ' 00:07:05.436 00:22:51 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:05.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.436 --rc genhtml_branch_coverage=1 00:07:05.436 --rc genhtml_function_coverage=1 00:07:05.436 --rc genhtml_legend=1 00:07:05.436 --rc geninfo_all_blocks=1 00:07:05.436 --rc geninfo_unexecuted_blocks=1 00:07:05.436 00:07:05.437 ' 00:07:05.437 00:22:51 version -- app/version.sh@17 -- # get_header_version major 00:07:05.437 00:22:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:05.437 00:22:51 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.437 00:22:51 version -- app/version.sh@14 -- # cut -f2 00:07:05.437 00:22:51 version -- app/version.sh@17 -- # major=24 00:07:05.437 00:22:51 version -- app/version.sh@18 -- # get_header_version minor 00:07:05.437 00:22:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:05.437 00:22:51 version -- app/version.sh@14 -- # cut -f2 00:07:05.437 00:22:51 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.437 00:22:51 version -- app/version.sh@18 -- # minor=9 00:07:05.437 00:22:51 version -- app/version.sh@19 -- # get_header_version patch 00:07:05.437 00:22:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:05.437 00:22:51 version -- app/version.sh@14 -- # cut -f2 00:07:05.437 00:22:51 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.437 00:22:51 version -- app/version.sh@19 -- # patch=1 00:07:05.437 00:22:51 version -- app/version.sh@20 -- # get_header_version suffix 00:07:05.437 00:22:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:05.437 00:22:51 version -- app/version.sh@14 -- # cut -f2 00:07:05.437 00:22:51 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.437 00:22:51 version -- app/version.sh@20 -- # suffix=-pre 00:07:05.437 00:22:51 version -- app/version.sh@22 -- # version=24.9 00:07:05.437 00:22:51 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:05.437 00:22:51 version -- app/version.sh@25 -- # version=24.9.1 00:07:05.437 00:22:51 version -- app/version.sh@28 -- # version=24.9.1rc0 00:07:05.437 00:22:51 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:05.437 00:22:51 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:05.437 00:22:51 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:07:05.437 00:22:51 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:07:05.437 00:07:05.437 real 0m0.259s 00:07:05.437 user 0m0.181s 00:07:05.437 sys 0m0.113s 00:07:05.437 00:22:51 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.437 00:22:51 version -- common/autotest_common.sh@10 -- # set +x 00:07:05.437 ************************************ 00:07:05.437 END TEST version 
00:07:05.437 ************************************ 00:07:05.437 00:22:51 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:05.437 00:22:51 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:05.437 00:22:51 -- spdk/autotest.sh@194 -- # uname -s 00:07:05.437 00:22:51 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:05.437 00:22:51 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:05.437 00:22:51 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:07:05.437 00:22:51 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:07:05.437 00:22:51 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:05.437 00:22:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:05.437 00:22:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.437 00:22:51 -- common/autotest_common.sh@10 -- # set +x 00:07:05.437 ************************************ 00:07:05.437 START TEST spdk_dd 00:07:05.437 ************************************ 00:07:05.437 00:22:51 spdk_dd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:05.437 * Looking for test storage... 00:07:05.437 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:05.437 00:22:51 spdk_dd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:05.437 00:22:51 spdk_dd -- common/autotest_common.sh@1681 -- # lcov --version 00:07:05.437 00:22:51 spdk_dd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:05.696 00:22:51 spdk_dd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:05.696 00:22:51 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.696 00:22:51 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.696 00:22:51 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.696 00:22:51 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.696 00:22:51 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.696 00:22:51 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.696 00:22:51 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.696 00:22:51 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.696 00:22:51 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.696 00:22:51 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.696 00:22:51 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.696 00:22:51 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:07:05.696 00:22:51 spdk_dd -- scripts/common.sh@345 -- # : 1 00:07:05.696 00:22:51 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.696 00:22:51 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:05.696 00:22:51 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:07:05.696 00:22:51 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:07:05.696 00:22:51 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.696 00:22:51 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:07:05.696 00:22:51 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.696 00:22:51 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:07:05.696 00:22:51 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:07:05.696 00:22:51 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.696 00:22:51 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:07:05.696 00:22:51 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.696 00:22:51 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.696 00:22:51 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.696 00:22:51 spdk_dd -- scripts/common.sh@368 -- # return 0 00:07:05.696 00:22:51 spdk_dd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.696 00:22:51 spdk_dd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:05.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.696 --rc genhtml_branch_coverage=1 00:07:05.696 --rc genhtml_function_coverage=1 00:07:05.696 --rc genhtml_legend=1 00:07:05.696 --rc geninfo_all_blocks=1 00:07:05.696 --rc geninfo_unexecuted_blocks=1 00:07:05.696 00:07:05.696 ' 00:07:05.696 00:22:51 spdk_dd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:05.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.696 --rc genhtml_branch_coverage=1 00:07:05.696 --rc genhtml_function_coverage=1 00:07:05.696 --rc genhtml_legend=1 00:07:05.696 --rc geninfo_all_blocks=1 00:07:05.696 --rc geninfo_unexecuted_blocks=1 00:07:05.696 00:07:05.696 ' 00:07:05.696 00:22:51 spdk_dd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:05.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.696 --rc genhtml_branch_coverage=1 00:07:05.696 --rc genhtml_function_coverage=1 00:07:05.696 --rc genhtml_legend=1 00:07:05.696 --rc geninfo_all_blocks=1 00:07:05.696 --rc geninfo_unexecuted_blocks=1 00:07:05.696 00:07:05.696 ' 00:07:05.696 00:22:51 spdk_dd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:05.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.696 --rc genhtml_branch_coverage=1 00:07:05.696 --rc genhtml_function_coverage=1 00:07:05.696 --rc genhtml_legend=1 00:07:05.696 --rc geninfo_all_blocks=1 00:07:05.696 --rc geninfo_unexecuted_blocks=1 00:07:05.696 00:07:05.696 ' 00:07:05.696 00:22:51 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:05.696 00:22:51 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:07:05.696 00:22:51 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.696 00:22:51 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.696 00:22:51 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.696 00:22:51 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.696 00:22:51 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.696 00:22:51 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.697 00:22:51 spdk_dd -- paths/export.sh@5 -- # export PATH 00:07:05.697 00:22:51 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.697 00:22:51 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:05.956 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:05.956 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:05.956 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:05.956 00:22:51 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:05.956 00:22:51 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@233 -- # local class 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@235 -- # local progif 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@236 -- # class=01 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:07:05.956 00:22:51 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:05.956 00:22:51 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:05.957 00:22:51 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:05.957 00:22:51 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:07:05.957 00:22:51 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:05.957 00:22:51 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@139 -- # local lib 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 
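The device discovery traced above (scripts/common.sh iter_pci_class_code 01 08 02) amounts to filtering lspci output by class code 0108 and programming interface 02. A rough stand-alone equivalent, with the class/subclass/prog-if values computed the same way this run does:

    class=$(printf %02x 1)      # 01  (mass storage)
    subclass=$(printf %02x 8)   # 08  (non-volatile memory controller)
    progif=$(printf %02x 2)     # 02  (NVMe)
    lspci -mm -n -D |
        grep -i -- "-p${progif}" |
        awk -v cc="\"${class}${subclass}\"" -F ' ' '{ if (cc ~ $2) print $1 }' |
        tr -d '"'
    # On this VM the two addresses printed are 0000:00:10.0 and 0000:00:11.0,
    # which populate the nvmes array used by dd/dd.sh.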
00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.14.0 == liburing.so.* ]] 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
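The long run of "[[ lib... == liburing.so.* ]]" checks around this point is dd/common.sh check_liburing walking the dynamic dependencies of the spdk_dd binary; condensed, it is roughly the loop below (the scan continues in the trace that follows).

    liburing_in_use=0
    while read -r _ lib _; do
        # each NEEDED entry from the binary, e.g. libspdk_bdev_nvme.so.7.0, liburing.so.2
        [[ $lib == liburing.so.* ]] && liburing_in_use=1
    done < <(objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd | grep NEEDED)
    echo "liburing_in_use=$liburing_in_use"   # 1 in this run, hence "spdk_dd linked to liburing" below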
00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:07:05.957 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.218 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:07:06.218 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.218 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:07:06.218 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.218 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.0 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.1.0 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.16.0 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.0 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.23 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.23 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.23 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.23 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.23 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.23 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.23 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.23 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.23 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.23 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.23 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.23 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.23 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.23 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.23 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.23 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.23 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:06.219 * spdk_dd linked to liburing 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:06.219 00:22:51 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:06.219 00:22:51 spdk_dd -- 
common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:07:06.219 00:22:51 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:07:06.220 00:22:51 spdk_dd -- 
common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=n 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=y 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@75 -- # CONFIG_FC=n 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@84 -- # 
CONFIG_PGO_DIR= 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:07:06.220 00:22:51 spdk_dd -- common/build_config.sh@89 -- # CONFIG_URING=y 00:07:06.220 00:22:51 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:07:06.220 00:22:51 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:07:06.220 00:22:51 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:07:06.220 00:22:51 spdk_dd -- dd/common.sh@153 -- # return 0 00:07:06.220 00:22:51 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:06.220 00:22:51 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:06.220 00:22:51 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:06.220 00:22:51 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.220 00:22:51 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:06.220 ************************************ 00:07:06.220 START TEST spdk_dd_basic_rw 00:07:06.220 ************************************ 00:07:06.220 00:22:51 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:06.220 * Looking for test storage... 00:07:06.220 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # lcov --version 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:06.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.220 --rc genhtml_branch_coverage=1 00:07:06.220 --rc genhtml_function_coverage=1 00:07:06.220 --rc genhtml_legend=1 00:07:06.220 --rc geninfo_all_blocks=1 00:07:06.220 --rc geninfo_unexecuted_blocks=1 00:07:06.220 00:07:06.220 ' 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:06.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.220 --rc genhtml_branch_coverage=1 00:07:06.220 --rc genhtml_function_coverage=1 00:07:06.220 --rc genhtml_legend=1 00:07:06.220 --rc geninfo_all_blocks=1 00:07:06.220 --rc geninfo_unexecuted_blocks=1 00:07:06.220 00:07:06.220 ' 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:06.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.220 --rc genhtml_branch_coverage=1 00:07:06.220 --rc genhtml_function_coverage=1 00:07:06.220 --rc genhtml_legend=1 00:07:06.220 --rc geninfo_all_blocks=1 00:07:06.220 --rc geninfo_unexecuted_blocks=1 00:07:06.220 00:07:06.220 ' 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:06.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.220 --rc genhtml_branch_coverage=1 00:07:06.220 --rc genhtml_function_coverage=1 00:07:06.220 --rc genhtml_legend=1 00:07:06.220 --rc geninfo_all_blocks=1 00:07:06.220 --rc geninfo_unexecuted_blocks=1 00:07:06.220 00:07:06.220 ' 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.220 00:22:52 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:07:06.221 00:22:52 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.221 00:22:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:06.221 00:22:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:06.221 00:22:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:06.221 00:22:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:07:06.221 00:22:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:06.221 00:22:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:06.221 00:22:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:06.221 00:22:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:06.221 00:22:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:06.221 00:22:52 
spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:07:06.221 00:22:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:07:06.221 00:22:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:07:06.221 00:22:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:07:06.482 00:22:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information 
Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 
Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:06.482 00:22:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:07:06.483 00:22:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported 
Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: 
Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 
Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:07:06.483 00:22:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:07:06.483 00:22:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:07:06.483 00:22:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:07:06.483 00:22:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:07:06.483 00:22:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:06.483 00:22:52 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:06.483 00:22:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:07:06.483 00:22:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:06.483 00:22:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:06.483 00:22:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.483 00:22:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:06.483 ************************************ 00:07:06.483 START TEST dd_bs_lt_native_bs 00:07:06.483 ************************************ 00:07:06.483 00:22:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1125 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:06.483 00:22:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0 00:07:06.483 00:22:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:06.483 00:22:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.483 00:22:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.483 00:22:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.483 00:22:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.483 00:22:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.483 00:22:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.483 00:22:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.483 00:22:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:06.483 00:22:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:06.483 { 00:07:06.483 "subsystems": [ 00:07:06.483 { 00:07:06.483 "subsystem": "bdev", 00:07:06.483 "config": [ 00:07:06.483 { 00:07:06.483 "params": { 00:07:06.483 "trtype": "pcie", 00:07:06.483 "traddr": "0000:00:10.0", 00:07:06.483 "name": "Nvme0" 00:07:06.483 }, 00:07:06.483 "method": "bdev_nvme_attach_controller" 00:07:06.483 }, 00:07:06.483 { 00:07:06.483 "method": "bdev_wait_for_examine" 00:07:06.483 } 00:07:06.483 ] 00:07:06.483 } 00:07:06.483 ] 00:07:06.483 } 00:07:06.483 [2024-12-17 00:22:52.426794] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:06.483 [2024-12-17 00:22:52.426890] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71735 ] 00:07:06.743 [2024-12-17 00:22:52.554037] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.743 [2024-12-17 00:22:52.585502] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.743 [2024-12-17 00:22:52.611720] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:06.743 [2024-12-17 00:22:52.698633] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:07:06.743 [2024-12-17 00:22:52.698713] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:07.002 [2024-12-17 00:22:52.760848] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:07.002 00:22:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:07:07.002 00:22:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:07.002 00:22:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:07:07.002 00:22:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:07:07.002 00:22:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:07:07.002 00:22:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:07.002 00:07:07.002 real 0m0.445s 00:07:07.002 user 0m0.291s 00:07:07.002 sys 0m0.110s 00:07:07.002 00:22:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.002 
************************************ 00:07:07.002 00:22:52 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:07:07.002 END TEST dd_bs_lt_native_bs 00:07:07.002 ************************************ 00:07:07.002 00:22:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:07:07.002 00:22:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:07.002 00:22:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.002 00:22:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:07.002 ************************************ 00:07:07.002 START TEST dd_rw 00:07:07.002 ************************************ 00:07:07.002 00:22:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1125 -- # basic_rw 4096 00:07:07.002 00:22:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:07:07.002 00:22:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:07:07.002 00:22:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:07:07.002 00:22:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:07:07.002 00:22:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:07.002 00:22:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:07.002 00:22:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:07.002 00:22:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:07.002 00:22:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:07.002 00:22:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:07.002 00:22:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:07.002 00:22:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:07.002 00:22:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:07.002 00:22:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:07.002 00:22:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:07.002 00:22:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:07.002 00:22:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:07.002 00:22:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:07.570 00:22:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:07:07.570 00:22:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:07.570 00:22:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:07.570 00:22:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:07.570 [2024-12-17 00:22:53.523171] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
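The dd_bs_lt_native_bs run above is the negative check: spdk_dd is expected to fail with --bs=2048 because 2048 is smaller than the controller's 4096-byte native block size, and the dd_rw test that starts here reuses that same native_bs=4096. The native size comes from matching the identify listing with a bash regex like the one visible in the trace. A minimal sketch of that extraction, assuming identify output shaped like the listing above (identify.txt and cur_fmt are illustrative names, not taken from the harness; lbaf and native_bs do appear in the trace):

#!/usr/bin/env bash
# Sketch only: pull the native block size out of an identify dump like the
# one above. identify.txt is a hypothetical capture of that output.
id_output=$(<identify.txt)

# e.g. "Current LBA Format: LBA Format #04"
cur_re='Current LBA Format: *LBA Format #([0-9]+)'
[[ $id_output =~ $cur_re ]] && cur_fmt=${BASH_REMATCH[1]}

# e.g. "LBA Format #04: Data Size: 4096" -> native_bs=4096
size_re="LBA Format #${cur_fmt}: Data Size: *([0-9]+)"
[[ $id_output =~ $size_re ]] && lbaf=${BASH_REMATCH[1]}

native_bs=$lbaf
echo "native block size: ${native_bs} bytes"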
00:07:07.570 [2024-12-17 00:22:53.523289] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71766 ] 00:07:07.570 { 00:07:07.570 "subsystems": [ 00:07:07.570 { 00:07:07.570 "subsystem": "bdev", 00:07:07.570 "config": [ 00:07:07.570 { 00:07:07.570 "params": { 00:07:07.570 "trtype": "pcie", 00:07:07.570 "traddr": "0000:00:10.0", 00:07:07.570 "name": "Nvme0" 00:07:07.570 }, 00:07:07.570 "method": "bdev_nvme_attach_controller" 00:07:07.570 }, 00:07:07.570 { 00:07:07.570 "method": "bdev_wait_for_examine" 00:07:07.570 } 00:07:07.570 ] 00:07:07.570 } 00:07:07.570 ] 00:07:07.570 } 00:07:07.829 [2024-12-17 00:22:53.663766] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.829 [2024-12-17 00:22:53.704368] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.829 [2024-12-17 00:22:53.736414] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:07.829  [2024-12-17T00:22:54.091Z] Copying: 60/60 [kB] (average 29 MBps) 00:07:08.088 00:07:08.088 00:22:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:08.088 00:22:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:08.088 00:22:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:08.088 00:22:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:08.088 [2024-12-17 00:22:54.005887] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
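Every spdk_dd invocation in this test receives its bdev configuration as JSON over an anonymous file descriptor (--json /dev/fd/62), which is why each run dumps a small subsystems/bdev config block like the one above before the EAL messages. A rough standalone equivalent using process substitution, assuming the same controller at PCIe address 0000:00:10.0 (the input file name is illustrative):

#!/usr/bin/env bash
# Sketch: run spdk_dd with the bdev config supplied over a file descriptor,
# mirroring the config blocks dumped in this log.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

"$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json <(cat <<'CONF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
CONF
)

Passing the config over a descriptor keeps the test from leaving temporary config files behind; any path readable by spdk_dd would work the same way.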
00:07:08.088 [2024-12-17 00:22:54.006025] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71780 ] 00:07:08.088 { 00:07:08.088 "subsystems": [ 00:07:08.088 { 00:07:08.088 "subsystem": "bdev", 00:07:08.088 "config": [ 00:07:08.088 { 00:07:08.089 "params": { 00:07:08.089 "trtype": "pcie", 00:07:08.089 "traddr": "0000:00:10.0", 00:07:08.089 "name": "Nvme0" 00:07:08.089 }, 00:07:08.089 "method": "bdev_nvme_attach_controller" 00:07:08.089 }, 00:07:08.089 { 00:07:08.089 "method": "bdev_wait_for_examine" 00:07:08.089 } 00:07:08.089 ] 00:07:08.089 } 00:07:08.089 ] 00:07:08.089 } 00:07:08.348 [2024-12-17 00:22:54.139981] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.348 [2024-12-17 00:22:54.175454] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.348 [2024-12-17 00:22:54.203777] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:08.348  [2024-12-17T00:22:54.610Z] Copying: 60/60 [kB] (average 14 MBps) 00:07:08.607 00:07:08.607 00:22:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:08.607 00:22:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:08.607 00:22:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:08.607 00:22:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:08.607 00:22:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:08.607 00:22:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:08.607 00:22:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:08.608 00:22:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:08.608 00:22:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:08.608 00:22:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:08.608 00:22:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:08.608 { 00:07:08.608 "subsystems": [ 00:07:08.608 { 00:07:08.608 "subsystem": "bdev", 00:07:08.608 "config": [ 00:07:08.608 { 00:07:08.608 "params": { 00:07:08.608 "trtype": "pcie", 00:07:08.608 "traddr": "0000:00:10.0", 00:07:08.608 "name": "Nvme0" 00:07:08.608 }, 00:07:08.608 "method": "bdev_nvme_attach_controller" 00:07:08.608 }, 00:07:08.608 { 00:07:08.608 "method": "bdev_wait_for_examine" 00:07:08.608 } 00:07:08.608 ] 00:07:08.608 } 00:07:08.608 ] 00:07:08.608 } 00:07:08.608 [2024-12-17 00:22:54.475000] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:08.608 [2024-12-17 00:22:54.475104] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71795 ] 00:07:08.867 [2024-12-17 00:22:54.613201] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.867 [2024-12-17 00:22:54.645098] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.867 [2024-12-17 00:22:54.673586] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:08.867  [2024-12-17T00:22:55.129Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:09.126 00:07:09.126 00:22:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:09.126 00:22:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:09.126 00:22:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:09.126 00:22:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:09.126 00:22:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:09.126 00:22:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:09.126 00:22:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:09.384 00:22:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:09.384 00:22:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:09.384 00:22:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:09.384 00:22:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:09.642 [2024-12-17 00:22:55.421043] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
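The records up to this point show one complete verification pass at bs=4096, qd=1: write dd.dump0 into Nvme0n1, read it back into dd.dump1, diff -q the two files, then overwrite the first 1 MiB of the bdev with zeroes before the next pass. The same cycle then repeats for qd=64 and for the larger block sizes, with the per-pass count dropping as the block size doubles (15 × 4096 = 61440, 7 × 8192 = 57344, 3 × 16384 = 49152 bytes). A simplified sketch of that loop, not the harness itself; the counts are the ones observed in this run, and bdev.json stands in for the JSON config passed over /dev/fd:

#!/usr/bin/env bash
set -euo pipefail
# Simplified sketch of the dd_rw write/read/verify/clear cycle in this log.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
conf=bdev.json                      # illustrative path to the config shown above
dump0=dd.dump0
dump1=dd.dump1

declare -A counts=([4096]=15 [8192]=7 [16384]=3)   # as observed in this run

for bs in 4096 8192 16384; do
  for qd in 1 64; do
    count=${counts[$bs]}
    head -c $((bs * count)) /dev/urandom > "$dump0"   # pseudo-random input (the harness uses gen_bytes)
    "$SPDK_DD" --if="$dump0" --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json "$conf"
    "$SPDK_DD" --ib=Nvme0n1 --of="$dump1" --bs="$bs" --qd="$qd" --count="$count" --json "$conf"
    diff -q "$dump0" "$dump1"                         # verify the round trip
    "$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json "$conf"   # clear before next pass
  done
done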
00:07:09.642 [2024-12-17 00:22:55.421178] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71814 ] 00:07:09.642 { 00:07:09.642 "subsystems": [ 00:07:09.642 { 00:07:09.642 "subsystem": "bdev", 00:07:09.642 "config": [ 00:07:09.643 { 00:07:09.643 "params": { 00:07:09.643 "trtype": "pcie", 00:07:09.643 "traddr": "0000:00:10.0", 00:07:09.643 "name": "Nvme0" 00:07:09.643 }, 00:07:09.643 "method": "bdev_nvme_attach_controller" 00:07:09.643 }, 00:07:09.643 { 00:07:09.643 "method": "bdev_wait_for_examine" 00:07:09.643 } 00:07:09.643 ] 00:07:09.643 } 00:07:09.643 ] 00:07:09.643 } 00:07:09.643 [2024-12-17 00:22:55.556902] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.643 [2024-12-17 00:22:55.590001] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.643 [2024-12-17 00:22:55.618488] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:09.901  [2024-12-17T00:22:55.904Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:09.901 00:07:09.901 00:22:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:09.901 00:22:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:09.901 00:22:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:09.901 00:22:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:09.901 [2024-12-17 00:22:55.887256] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:09.901 [2024-12-17 00:22:55.887884] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71828 ] 00:07:09.901 { 00:07:09.901 "subsystems": [ 00:07:09.901 { 00:07:09.901 "subsystem": "bdev", 00:07:09.901 "config": [ 00:07:09.901 { 00:07:09.901 "params": { 00:07:09.901 "trtype": "pcie", 00:07:09.901 "traddr": "0000:00:10.0", 00:07:09.901 "name": "Nvme0" 00:07:09.901 }, 00:07:09.901 "method": "bdev_nvme_attach_controller" 00:07:09.901 }, 00:07:09.901 { 00:07:09.901 "method": "bdev_wait_for_examine" 00:07:09.901 } 00:07:09.901 ] 00:07:09.901 } 00:07:09.901 ] 00:07:09.901 } 00:07:10.160 [2024-12-17 00:22:56.024153] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.160 [2024-12-17 00:22:56.054742] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.160 [2024-12-17 00:22:56.080898] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:10.419  [2024-12-17T00:22:56.422Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:10.419 00:07:10.419 00:22:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:10.419 00:22:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:10.419 00:22:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:10.419 00:22:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:10.419 00:22:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:10.419 00:22:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:10.419 00:22:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:10.419 00:22:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:10.419 00:22:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:10.419 00:22:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:10.419 00:22:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:10.419 [2024-12-17 00:22:56.356764] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:10.419 [2024-12-17 00:22:56.356869] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71843 ] 00:07:10.419 { 00:07:10.419 "subsystems": [ 00:07:10.419 { 00:07:10.419 "subsystem": "bdev", 00:07:10.419 "config": [ 00:07:10.419 { 00:07:10.419 "params": { 00:07:10.419 "trtype": "pcie", 00:07:10.419 "traddr": "0000:00:10.0", 00:07:10.420 "name": "Nvme0" 00:07:10.420 }, 00:07:10.420 "method": "bdev_nvme_attach_controller" 00:07:10.420 }, 00:07:10.420 { 00:07:10.420 "method": "bdev_wait_for_examine" 00:07:10.420 } 00:07:10.420 ] 00:07:10.420 } 00:07:10.420 ] 00:07:10.420 } 00:07:10.679 [2024-12-17 00:22:56.492467] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.679 [2024-12-17 00:22:56.523116] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.679 [2024-12-17 00:22:56.549312] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:10.679  [2024-12-17T00:22:56.941Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:10.938 00:07:10.938 00:22:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:10.938 00:22:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:10.938 00:22:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:10.938 00:22:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:10.938 00:22:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:10.938 00:22:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:10.938 00:22:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:10.938 00:22:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:11.506 00:22:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:11.506 00:22:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:11.506 00:22:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:11.506 00:22:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:11.506 { 00:07:11.506 "subsystems": [ 00:07:11.506 { 00:07:11.506 "subsystem": "bdev", 00:07:11.506 "config": [ 00:07:11.506 { 00:07:11.506 "params": { 00:07:11.506 "trtype": "pcie", 00:07:11.506 "traddr": "0000:00:10.0", 00:07:11.506 "name": "Nvme0" 00:07:11.506 }, 00:07:11.506 "method": "bdev_nvme_attach_controller" 00:07:11.506 }, 00:07:11.506 { 00:07:11.506 "method": "bdev_wait_for_examine" 00:07:11.506 } 00:07:11.506 ] 00:07:11.506 } 00:07:11.506 ] 00:07:11.506 } 00:07:11.506 [2024-12-17 00:22:57.295167] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:11.506 [2024-12-17 00:22:57.295276] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71862 ] 00:07:11.506 [2024-12-17 00:22:57.430543] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.506 [2024-12-17 00:22:57.461261] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.506 [2024-12-17 00:22:57.487571] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:11.765  [2024-12-17T00:22:57.768Z] Copying: 56/56 [kB] (average 27 MBps) 00:07:11.765 00:07:11.765 00:22:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:11.765 00:22:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:11.765 00:22:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:11.765 00:22:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:12.024 { 00:07:12.025 "subsystems": [ 00:07:12.025 { 00:07:12.025 "subsystem": "bdev", 00:07:12.025 "config": [ 00:07:12.025 { 00:07:12.025 "params": { 00:07:12.025 "trtype": "pcie", 00:07:12.025 "traddr": "0000:00:10.0", 00:07:12.025 "name": "Nvme0" 00:07:12.025 }, 00:07:12.025 "method": "bdev_nvme_attach_controller" 00:07:12.025 }, 00:07:12.025 { 00:07:12.025 "method": "bdev_wait_for_examine" 00:07:12.025 } 00:07:12.025 ] 00:07:12.025 } 00:07:12.025 ] 00:07:12.025 } 00:07:12.025 [2024-12-17 00:22:57.776743] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:12.025 [2024-12-17 00:22:57.776860] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71870 ] 00:07:12.025 [2024-12-17 00:22:57.913338] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.025 [2024-12-17 00:22:57.946346] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.025 [2024-12-17 00:22:57.972586] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:12.283  [2024-12-17T00:22:58.286Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:12.283 00:07:12.283 00:22:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:12.283 00:22:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:12.283 00:22:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:12.283 00:22:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:12.283 00:22:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:12.283 00:22:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:12.283 00:22:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:12.283 00:22:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:12.283 00:22:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:12.283 00:22:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:12.283 00:22:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:12.283 [2024-12-17 00:22:58.243763] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:12.283 [2024-12-17 00:22:58.243864] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71886 ] 00:07:12.283 { 00:07:12.283 "subsystems": [ 00:07:12.283 { 00:07:12.283 "subsystem": "bdev", 00:07:12.283 "config": [ 00:07:12.283 { 00:07:12.283 "params": { 00:07:12.283 "trtype": "pcie", 00:07:12.283 "traddr": "0000:00:10.0", 00:07:12.283 "name": "Nvme0" 00:07:12.283 }, 00:07:12.283 "method": "bdev_nvme_attach_controller" 00:07:12.283 }, 00:07:12.283 { 00:07:12.283 "method": "bdev_wait_for_examine" 00:07:12.283 } 00:07:12.283 ] 00:07:12.283 } 00:07:12.283 ] 00:07:12.283 } 00:07:12.543 [2024-12-17 00:22:58.381503] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.543 [2024-12-17 00:22:58.414073] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.543 [2024-12-17 00:22:58.441115] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:12.543  [2024-12-17T00:22:58.804Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:12.801 00:07:12.801 00:22:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:12.801 00:22:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:12.801 00:22:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:12.801 00:22:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:12.801 00:22:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:12.801 00:22:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:12.802 00:22:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:13.369 00:22:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:13.369 00:22:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:13.369 00:22:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:13.369 00:22:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:13.369 { 00:07:13.369 "subsystems": [ 00:07:13.369 { 00:07:13.369 "subsystem": "bdev", 00:07:13.369 "config": [ 00:07:13.369 { 00:07:13.369 "params": { 00:07:13.369 "trtype": "pcie", 00:07:13.369 "traddr": "0000:00:10.0", 00:07:13.369 "name": "Nvme0" 00:07:13.369 }, 00:07:13.369 "method": "bdev_nvme_attach_controller" 00:07:13.369 }, 00:07:13.369 { 00:07:13.369 "method": "bdev_wait_for_examine" 00:07:13.369 } 00:07:13.369 ] 00:07:13.369 } 00:07:13.369 ] 00:07:13.369 } 00:07:13.369 [2024-12-17 00:22:59.276178] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:13.369 [2024-12-17 00:22:59.276331] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71905 ] 00:07:13.628 [2024-12-17 00:22:59.413509] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.628 [2024-12-17 00:22:59.444983] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.628 [2024-12-17 00:22:59.471198] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:13.628  [2024-12-17T00:22:59.890Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:13.887 00:07:13.887 00:22:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:13.887 00:22:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:13.887 00:22:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:13.887 00:22:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:13.887 [2024-12-17 00:22:59.734270] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:13.887 [2024-12-17 00:22:59.734384] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71918 ] 00:07:13.887 { 00:07:13.887 "subsystems": [ 00:07:13.887 { 00:07:13.887 "subsystem": "bdev", 00:07:13.887 "config": [ 00:07:13.887 { 00:07:13.887 "params": { 00:07:13.887 "trtype": "pcie", 00:07:13.887 "traddr": "0000:00:10.0", 00:07:13.887 "name": "Nvme0" 00:07:13.887 }, 00:07:13.887 "method": "bdev_nvme_attach_controller" 00:07:13.887 }, 00:07:13.887 { 00:07:13.887 "method": "bdev_wait_for_examine" 00:07:13.887 } 00:07:13.887 ] 00:07:13.887 } 00:07:13.887 ] 00:07:13.887 } 00:07:13.887 [2024-12-17 00:22:59.868984] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.146 [2024-12-17 00:22:59.901408] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.146 [2024-12-17 00:22:59.929023] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:14.146  [2024-12-17T00:23:00.149Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:14.146 00:07:14.405 00:23:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:14.405 00:23:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:14.405 00:23:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:14.405 00:23:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:14.405 00:23:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:14.405 00:23:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:14.405 00:23:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:14.405 00:23:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
00:07:14.405 00:23:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:14.405 00:23:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:14.405 00:23:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:14.405 [2024-12-17 00:23:00.212605] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:14.405 [2024-12-17 00:23:00.212721] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71934 ] 00:07:14.405 { 00:07:14.405 "subsystems": [ 00:07:14.405 { 00:07:14.405 "subsystem": "bdev", 00:07:14.405 "config": [ 00:07:14.405 { 00:07:14.405 "params": { 00:07:14.405 "trtype": "pcie", 00:07:14.405 "traddr": "0000:00:10.0", 00:07:14.405 "name": "Nvme0" 00:07:14.405 }, 00:07:14.405 "method": "bdev_nvme_attach_controller" 00:07:14.405 }, 00:07:14.405 { 00:07:14.405 "method": "bdev_wait_for_examine" 00:07:14.405 } 00:07:14.405 ] 00:07:14.405 } 00:07:14.405 ] 00:07:14.405 } 00:07:14.405 [2024-12-17 00:23:00.345867] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.405 [2024-12-17 00:23:00.379155] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.664 [2024-12-17 00:23:00.406942] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:14.664  [2024-12-17T00:23:00.667Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:14.664 00:07:14.664 00:23:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:14.664 00:23:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:14.664 00:23:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:14.664 00:23:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:14.664 00:23:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:14.664 00:23:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:14.664 00:23:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:14.664 00:23:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:15.230 00:23:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:15.230 00:23:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:15.230 00:23:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:15.230 00:23:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:15.230 [2024-12-17 00:23:01.133830] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:15.230 [2024-12-17 00:23:01.133935] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71953 ] 00:07:15.230 { 00:07:15.230 "subsystems": [ 00:07:15.230 { 00:07:15.230 "subsystem": "bdev", 00:07:15.230 "config": [ 00:07:15.230 { 00:07:15.230 "params": { 00:07:15.230 "trtype": "pcie", 00:07:15.230 "traddr": "0000:00:10.0", 00:07:15.230 "name": "Nvme0" 00:07:15.230 }, 00:07:15.230 "method": "bdev_nvme_attach_controller" 00:07:15.230 }, 00:07:15.230 { 00:07:15.230 "method": "bdev_wait_for_examine" 00:07:15.230 } 00:07:15.230 ] 00:07:15.230 } 00:07:15.230 ] 00:07:15.230 } 00:07:15.489 [2024-12-17 00:23:01.269383] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.489 [2024-12-17 00:23:01.301423] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.489 [2024-12-17 00:23:01.330790] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:15.489  [2024-12-17T00:23:01.751Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:15.748 00:07:15.748 00:23:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:07:15.748 00:23:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:15.748 00:23:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:15.748 00:23:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:15.748 [2024-12-17 00:23:01.613959] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:15.748 [2024-12-17 00:23:01.614061] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71966 ] 00:07:15.748 { 00:07:15.748 "subsystems": [ 00:07:15.748 { 00:07:15.748 "subsystem": "bdev", 00:07:15.748 "config": [ 00:07:15.748 { 00:07:15.748 "params": { 00:07:15.748 "trtype": "pcie", 00:07:15.748 "traddr": "0000:00:10.0", 00:07:15.748 "name": "Nvme0" 00:07:15.748 }, 00:07:15.748 "method": "bdev_nvme_attach_controller" 00:07:15.748 }, 00:07:15.748 { 00:07:15.748 "method": "bdev_wait_for_examine" 00:07:15.748 } 00:07:15.748 ] 00:07:15.748 } 00:07:15.748 ] 00:07:15.748 } 00:07:16.007 [2024-12-17 00:23:01.750136] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.007 [2024-12-17 00:23:01.784496] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.007 [2024-12-17 00:23:01.813353] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:16.007  [2024-12-17T00:23:02.269Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:16.266 00:07:16.266 00:23:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:16.266 00:23:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:16.266 00:23:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:16.266 00:23:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:16.266 00:23:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:16.266 00:23:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:16.266 00:23:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:16.266 00:23:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:16.266 00:23:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:16.266 00:23:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:16.266 00:23:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:16.266 { 00:07:16.266 "subsystems": [ 00:07:16.266 { 00:07:16.266 "subsystem": "bdev", 00:07:16.266 "config": [ 00:07:16.266 { 00:07:16.266 "params": { 00:07:16.266 "trtype": "pcie", 00:07:16.266 "traddr": "0000:00:10.0", 00:07:16.266 "name": "Nvme0" 00:07:16.266 }, 00:07:16.266 "method": "bdev_nvme_attach_controller" 00:07:16.266 }, 00:07:16.266 { 00:07:16.266 "method": "bdev_wait_for_examine" 00:07:16.266 } 00:07:16.266 ] 00:07:16.266 } 00:07:16.266 ] 00:07:16.266 } 00:07:16.266 [2024-12-17 00:23:02.111065] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:16.266 [2024-12-17 00:23:02.111158] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71982 ] 00:07:16.266 [2024-12-17 00:23:02.247141] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.525 [2024-12-17 00:23:02.281927] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.525 [2024-12-17 00:23:02.310529] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:16.525  [2024-12-17T00:23:02.787Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:16.784 00:07:16.784 00:23:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:16.784 00:23:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:16.784 00:23:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:16.784 00:23:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:16.784 00:23:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:16.784 00:23:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:16.784 00:23:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:17.042 00:23:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:17.042 00:23:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:17.042 00:23:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:17.042 00:23:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:17.301 [2024-12-17 00:23:03.062598] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:17.301 [2024-12-17 00:23:03.062717] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72001 ] 00:07:17.301 { 00:07:17.301 "subsystems": [ 00:07:17.301 { 00:07:17.301 "subsystem": "bdev", 00:07:17.301 "config": [ 00:07:17.301 { 00:07:17.301 "params": { 00:07:17.301 "trtype": "pcie", 00:07:17.301 "traddr": "0000:00:10.0", 00:07:17.301 "name": "Nvme0" 00:07:17.301 }, 00:07:17.301 "method": "bdev_nvme_attach_controller" 00:07:17.301 }, 00:07:17.301 { 00:07:17.301 "method": "bdev_wait_for_examine" 00:07:17.301 } 00:07:17.301 ] 00:07:17.301 } 00:07:17.301 ] 00:07:17.301 } 00:07:17.301 [2024-12-17 00:23:03.200012] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.301 [2024-12-17 00:23:03.231043] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.301 [2024-12-17 00:23:03.257508] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:17.560  [2024-12-17T00:23:03.563Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:17.560 00:07:17.560 00:23:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:17.560 00:23:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:07:17.560 00:23:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:17.560 00:23:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:17.560 [2024-12-17 00:23:03.548972] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:17.560 [2024-12-17 00:23:03.549074] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72014 ] 00:07:17.560 { 00:07:17.560 "subsystems": [ 00:07:17.560 { 00:07:17.560 "subsystem": "bdev", 00:07:17.560 "config": [ 00:07:17.560 { 00:07:17.560 "params": { 00:07:17.560 "trtype": "pcie", 00:07:17.560 "traddr": "0000:00:10.0", 00:07:17.560 "name": "Nvme0" 00:07:17.560 }, 00:07:17.560 "method": "bdev_nvme_attach_controller" 00:07:17.560 }, 00:07:17.560 { 00:07:17.560 "method": "bdev_wait_for_examine" 00:07:17.560 } 00:07:17.560 ] 00:07:17.560 } 00:07:17.560 ] 00:07:17.560 } 00:07:17.819 [2024-12-17 00:23:03.684563] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.819 [2024-12-17 00:23:03.715514] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.819 [2024-12-17 00:23:03.741944] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.078  [2024-12-17T00:23:04.081Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:18.078 00:07:18.078 00:23:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:18.078 00:23:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:18.078 00:23:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:18.078 00:23:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:18.078 00:23:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:18.078 00:23:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:18.078 00:23:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:18.078 00:23:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:18.078 00:23:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:18.078 00:23:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:18.078 00:23:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:18.078 [2024-12-17 00:23:04.012736] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:18.078 [2024-12-17 00:23:04.012855] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72030 ] 00:07:18.078 { 00:07:18.078 "subsystems": [ 00:07:18.078 { 00:07:18.078 "subsystem": "bdev", 00:07:18.078 "config": [ 00:07:18.078 { 00:07:18.078 "params": { 00:07:18.078 "trtype": "pcie", 00:07:18.078 "traddr": "0000:00:10.0", 00:07:18.078 "name": "Nvme0" 00:07:18.078 }, 00:07:18.078 "method": "bdev_nvme_attach_controller" 00:07:18.078 }, 00:07:18.078 { 00:07:18.078 "method": "bdev_wait_for_examine" 00:07:18.078 } 00:07:18.078 ] 00:07:18.078 } 00:07:18.078 ] 00:07:18.078 } 00:07:18.337 [2024-12-17 00:23:04.149006] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.337 [2024-12-17 00:23:04.180534] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.337 [2024-12-17 00:23:04.208103] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.337  [2024-12-17T00:23:04.599Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:18.596 00:07:18.596 00:07:18.596 real 0m11.568s 00:07:18.596 user 0m8.548s 00:07:18.596 sys 0m3.608s 00:07:18.596 00:23:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.596 00:23:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:18.596 ************************************ 00:07:18.596 END TEST dd_rw 00:07:18.596 ************************************ 00:07:18.596 00:23:04 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:07:18.597 00:23:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:18.597 00:23:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.597 00:23:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:18.597 ************************************ 00:07:18.597 START TEST dd_rw_offset 00:07:18.597 ************************************ 00:07:18.597 00:23:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1125 -- # basic_offset 00:07:18.597 00:23:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:07:18.597 00:23:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:07:18.597 00:23:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:07:18.597 00:23:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:18.597 00:23:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:07:18.597 00:23:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=vftp8p6gihgjyf1dtnpus186zbzx4f29snjk3kwr432w2gldo3yw5ip6lxqshi8qmqxzwtlrjaubob2x5bvlva1b8fv7ywb1fueykakp3h0r38c4nw3spzl2bg0cpbmsn41u8q1f0ov5tqkdprjeqvpb1iegy6d13lvdh7sl2ndmsqngb9r3qry9ccyplif9jl6gyk0s3gz4lp7bfwjki3bg905ksdxdfpkztc4upu02hu4tpa1cfk5qe7jlptp0ivjlzw58zt9qifbtjgkj4fwcj1txlf4k3anwt3ny5ipzpfom924iy8vhfuxtzy3ra0qqa8pzfm0x56gr52yi71ix0tvntomuuafazzh1whv2tqy2oiszag7qo3boclnu3th7xptyxf55q682esfk79vpoav6fmpdtouqszrpcqxs8od55sauntjzi03qdl3hni8oj7a1qba0u9qt0vs71wscs8brt2h8rcfbze95h3v8w99tj8pmpm8bm9rt6u0q79d9dipgg5j9sofm71st3t2lhdb5pms9dfan93c5n0ejagi0kup04qcl3iwyabwv2qpqecatc7wlkbu5cu0t2btpc0ez4lk9u3jyqaw2ki3661dfxn4yqs14n03nbrh0ao1r5np2yedeub9bzux01mq0cj3gzsjo272gvvba1wj8kva0r69sb6q2insavmgev6tcozmvsllyfmeec9lryrhfew8w5rw7ch9740xhkwh9q58jo7kj3o0dzsqc0rssemyk53xo8v05kypdnm87j8ybcb1751c1aps3koe88su4ku2vto430klxqycy5vbudlpp1nskrodbw26vmghjyf67wta3wrc0woax3x8s3zpb9pd9nm3s942k57nkig98f6jrboxx6p9r49znccginzmzad0pinwqyo764xd2o8skaq4n4mwsn52zj6ua0d3lj7lwfc8uajce24f63r5t1hbsdyvd1i97gdrchfqqdv0v67d58hpelk55591i7ynp66u8jtfzxok2g7g9cemj9ek3cn2fbtw2ne6twbgpa7mej036ki1a7n8ttyjjdkde6jzccg0jf5rp8ifd9w4zqdxvoqcfq5oags2jb6mzt7022057956xcudlwmfpd1f9zrfla4h5oywrx3t4j2gkgn4xc8n3zerbf6s619e7m50r0ybzvtyah4kwrv0c1ni3exnml42zqsdm9395cjwrnhad1maeoc94dw222tlc4frzh2n2uba77l7xkggizem6e1646h9d16jdx9ypfdrmphh2gl9zxmxywuan8qxj4iianhzs7gx5quoglcxz3edma6ekz9ywgw7nb9eq37owpcr9siis1dxf9jas23i6mc8hvzskacit1ey7bo2fin56v2puqk47wx1o1jedyilgseh530u8rxzw5woa901wd7rnlpy2lgpyygfqujrm6hdgnic2gpy664e2l6vitr3o9pthp6lyfjntg1oc2fcfyr423ftf791awir9filtaqlhkqvwiz0f2yfiflq09lxw04ra7c1ej7kp670b9h4gwipjv4ev9hlyjh9k8zj6brgrwkgyhbbukm37n10p0ksqbik1rg9m63tqfb4w7u3694yqyxhpnqpkk8vydrfztu2n8ls24m1f6c0i73o8dgazmwowz4u8hj6gtel4l9flv3qen1p1bh2qyjl2qcdcsojh742lhhmt449aui644meumcxffi7ywfjqtoq45sc6rxnieq01xaorshbp5oglc8se2vusw6wj3wlc4d3tonh17fjp3tnuiictk6ijmc599ljo1cahodtttixhq57qeiupi2v1swtgo3onl566ujc1wrc9ftf73okroif6tto477mncz3lk3oqtiarmggx9byx01vbfdicliqpy0cfjl40w6bc85rnqs1aq16c4ifp6b5o6vf2idl3oflao3ri18rdhjivkjgj8wa2zn3amt45h7sddd1eyvvw29vilbuldml1ke8zb4run5xvpx1kcqvakvkt3sa72etcqvhm5l52jo1zx9aytmb3xqqk02eh89agkn2d069ce7g6ep6zcg9ypfucd6hekf76gt65dvv2f87ur3ita3nce0g5kp6enw0l7evsuwh7h01dqvsl6cugb793twqcddbekbhseyr95u7ejeob1crdtqn4t6wta0y3c8o7ljvwd5xy0j3401bacx9jdlowwnvykzsbefearltldr8ijtt14w0fjkigaubatw8qsnmfre120spp5ff5c32cavpax8qrusnw176tuxayvlx2ahejls0gid54niuqvq9yzcisp21kbhie3csac8i93u1asp24cvmojkapbuwtfdkylug1a7nng99bsh90bvyw9emgdifjix1x0ie7v9upk5hdcuz9280cfm0trteamcy099ifx5jq8pqw7bwwztzd38f75e7ilpamagwyb13r7p7jltfs9s26a40nnfba3bklmfcuwrws4xea7g89zej5kjzfbhqr10muqd6nqr0b3461b884c6ooiy6jg6we9ef074vt3r60v1u23l7apum8ug0ka6endysm74pc2wthetghnsk8shjj8porym1gchozth9gxksgbaksxta2w5uvtayrrhp16jfmdlgsogdmjptqzcw74j994mr3npsoictwu3y3h2t4bm9bk1idygr53oxz1vvuoobeoohiypdo55aqkotlwjfr6vlg5b11ygina4513s3cvcjpzdrgp44to3kob260x2cl9byne33q6pgy1f21y0c5k8sdqizdqegvslbrqyobkdahnhuk43zrfijgqeimplczn96jrf6ggueszkuvgjltd71zu24zucpz0oad2ness5iah5j87hc5w90iemhgvtagvtple421khmk9w3ovql04ymn7ob4q73scs5goopfmjc9x7chgtausf6nmmtdvrurqaxnedbag6mxgp8jongvqrrxeyt42zep206k20i1bjviccrrjyc5q1kwg9rp4jeniifddl6x61tprcepninjrnc0eb0tasot68ps5mjuea4euhxkkw2z9z4unfoxax7o459rv7wzmp544g0x3xbgn7xhbpha63zxycrlp042do7tbskdsrjseg9ca2xp4g9fw3klqyeqxhvbtu7dfhkqv16tcd9da7jeh7f4urzqkil7a0w97axtpuyw5alj8ugtwuthdr3qjz738a40h40uors9xl4o7lcevczcmx4l46ly6459a6kawtcf86jqruodvhy7n85l62mrwo0ew6i35wph4p9cersoqesif76l6dy89u32859kiu2fap9y60s0lc1ppevg82q1hsarwqobecdd61d1o9nf8ohh7bwwag7sca401g00apk0o0oo1ltgvm3fi3lfmrfgoihcywdmlckdwl4qu1ew3mcy3bs40uj5i3r1z1x5sfy7hjzhoy4zy42wuqj5pwoqq9pcea1p2m1pf0x4izhtmj
cn4i9vg853eu18xojklsz3wsstqb68jgogopl5wuuqjk3im1jobo49w5w5h9zfsnnsus1la6ykpkoxw7ct81eh6brcx1h4spyr94fbmzivlgw5qtrp2x6bzbdbz08ngx79k4kmemxai4rvuj74faantbjs3slcpxzblchdyh0czb25rt2cf11ilyyfxpmyiwxgwpdf90ow7m7m3qpw4rihw2snkmgf6kpjz35kkmy3ncp8lwt55ojfoeuvw4qq7hflboh7f0pgb4y7l3co674at71kssrq90rz727i7sgoamucdizjta21oepl7gczdb5pbkcgffa8uho6ckcd392iqognwj0n67d3m79oonbco6gfbm63h1abuln4apl7h4j5hx2r1z2iljqcyej8qlufj37du8ymmjtulklz9xeiw32ftxz1lukrv0f86kr4fnawu7emo9b93xz55v2tw2t99hcnuahkropsu5nyd7nddjmjfhgexgklu6aqhvt8ulrina7s27hafphx6xy8ynzcmnrupj4hqfgi 00:07:18.597 00:23:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:07:18.597 00:23:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:07:18.597 00:23:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:18.597 00:23:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:18.893 [2024-12-17 00:23:04.627584] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:18.893 [2024-12-17 00:23:04.627725] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72060 ] 00:07:18.893 { 00:07:18.893 "subsystems": [ 00:07:18.893 { 00:07:18.893 "subsystem": "bdev", 00:07:18.893 "config": [ 00:07:18.893 { 00:07:18.893 "params": { 00:07:18.893 "trtype": "pcie", 00:07:18.893 "traddr": "0000:00:10.0", 00:07:18.893 "name": "Nvme0" 00:07:18.893 }, 00:07:18.893 "method": "bdev_nvme_attach_controller" 00:07:18.893 }, 00:07:18.893 { 00:07:18.893 "method": "bdev_wait_for_examine" 00:07:18.893 } 00:07:18.893 ] 00:07:18.893 } 00:07:18.893 ] 00:07:18.893 } 00:07:18.893 [2024-12-17 00:23:04.769036] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.893 [2024-12-17 00:23:04.800047] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.893 [2024-12-17 00:23:04.827829] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:19.156  [2024-12-17T00:23:05.159Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:19.156 00:07:19.156 00:23:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:07:19.156 00:23:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:07:19.156 00:23:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:19.156 00:23:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:19.156 { 00:07:19.156 "subsystems": [ 00:07:19.156 { 00:07:19.156 "subsystem": "bdev", 00:07:19.156 "config": [ 00:07:19.156 { 00:07:19.156 "params": { 00:07:19.156 "trtype": "pcie", 00:07:19.156 "traddr": "0000:00:10.0", 00:07:19.156 "name": "Nvme0" 00:07:19.156 }, 00:07:19.156 "method": "bdev_nvme_attach_controller" 00:07:19.156 }, 00:07:19.157 { 00:07:19.157 "method": "bdev_wait_for_examine" 00:07:19.157 } 00:07:19.157 ] 00:07:19.157 } 00:07:19.157 ] 00:07:19.157 } 00:07:19.157 [2024-12-17 00:23:05.101222] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
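The dd_rw_offset pass above generates a 4096-character pseudo-random string, writes it one block into the bdev with --seek=1, and the read-back that follows pulls the same block with --skip=1 --count=1 so the two strings can be compared with read -rn4096. A compact sketch of that round trip, with the config plumbing as above and illustrative file names:

#!/usr/bin/env bash
# Sketch of the dd_rw_offset round trip in this log: write a 4 KiB pattern
# one block into the bdev with --seek=1, read it back with --skip=1.
# bdev.json stands in for the JSON config shown above.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
conf=bdev.json

data=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 4096)   # 4096 test characters
printf '%s' "$data" > dd.dump0

"$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json "$conf"            # write at block 1
"$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json "$conf"  # read block 1 back

IFS= read -rn4096 data_check < dd.dump1
if [[ $data == "$data_check" ]]; then
  echo "offset round trip OK"
else
  echo "offset round trip mismatch" >&2
  exit 1
fi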
00:07:19.157 [2024-12-17 00:23:05.101396] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72074 ] 00:07:19.414 [2024-12-17 00:23:05.238646] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.414 [2024-12-17 00:23:05.273501] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.414 [2024-12-17 00:23:05.300061] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:19.414  [2024-12-17T00:23:05.676Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:19.673 00:07:19.673 00:23:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:07:19.674 00:23:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ vftp8p6gihgjyf1dtnpus186zbzx4f29snjk3kwr432w2gldo3yw5ip6lxqshi8qmqxzwtlrjaubob2x5bvlva1b8fv7ywb1fueykakp3h0r38c4nw3spzl2bg0cpbmsn41u8q1f0ov5tqkdprjeqvpb1iegy6d13lvdh7sl2ndmsqngb9r3qry9ccyplif9jl6gyk0s3gz4lp7bfwjki3bg905ksdxdfpkztc4upu02hu4tpa1cfk5qe7jlptp0ivjlzw58zt9qifbtjgkj4fwcj1txlf4k3anwt3ny5ipzpfom924iy8vhfuxtzy3ra0qqa8pzfm0x56gr52yi71ix0tvntomuuafazzh1whv2tqy2oiszag7qo3boclnu3th7xptyxf55q682esfk79vpoav6fmpdtouqszrpcqxs8od55sauntjzi03qdl3hni8oj7a1qba0u9qt0vs71wscs8brt2h8rcfbze95h3v8w99tj8pmpm8bm9rt6u0q79d9dipgg5j9sofm71st3t2lhdb5pms9dfan93c5n0ejagi0kup04qcl3iwyabwv2qpqecatc7wlkbu5cu0t2btpc0ez4lk9u3jyqaw2ki3661dfxn4yqs14n03nbrh0ao1r5np2yedeub9bzux01mq0cj3gzsjo272gvvba1wj8kva0r69sb6q2insavmgev6tcozmvsllyfmeec9lryrhfew8w5rw7ch9740xhkwh9q58jo7kj3o0dzsqc0rssemyk53xo8v05kypdnm87j8ybcb1751c1aps3koe88su4ku2vto430klxqycy5vbudlpp1nskrodbw26vmghjyf67wta3wrc0woax3x8s3zpb9pd9nm3s942k57nkig98f6jrboxx6p9r49znccginzmzad0pinwqyo764xd2o8skaq4n4mwsn52zj6ua0d3lj7lwfc8uajce24f63r5t1hbsdyvd1i97gdrchfqqdv0v67d58hpelk55591i7ynp66u8jtfzxok2g7g9cemj9ek3cn2fbtw2ne6twbgpa7mej036ki1a7n8ttyjjdkde6jzccg0jf5rp8ifd9w4zqdxvoqcfq5oags2jb6mzt7022057956xcudlwmfpd1f9zrfla4h5oywrx3t4j2gkgn4xc8n3zerbf6s619e7m50r0ybzvtyah4kwrv0c1ni3exnml42zqsdm9395cjwrnhad1maeoc94dw222tlc4frzh2n2uba77l7xkggizem6e1646h9d16jdx9ypfdrmphh2gl9zxmxywuan8qxj4iianhzs7gx5quoglcxz3edma6ekz9ywgw7nb9eq37owpcr9siis1dxf9jas23i6mc8hvzskacit1ey7bo2fin56v2puqk47wx1o1jedyilgseh530u8rxzw5woa901wd7rnlpy2lgpyygfqujrm6hdgnic2gpy664e2l6vitr3o9pthp6lyfjntg1oc2fcfyr423ftf791awir9filtaqlhkqvwiz0f2yfiflq09lxw04ra7c1ej7kp670b9h4gwipjv4ev9hlyjh9k8zj6brgrwkgyhbbukm37n10p0ksqbik1rg9m63tqfb4w7u3694yqyxhpnqpkk8vydrfztu2n8ls24m1f6c0i73o8dgazmwowz4u8hj6gtel4l9flv3qen1p1bh2qyjl2qcdcsojh742lhhmt449aui644meumcxffi7ywfjqtoq45sc6rxnieq01xaorshbp5oglc8se2vusw6wj3wlc4d3tonh17fjp3tnuiictk6ijmc599ljo1cahodtttixhq57qeiupi2v1swtgo3onl566ujc1wrc9ftf73okroif6tto477mncz3lk3oqtiarmggx9byx01vbfdicliqpy0cfjl40w6bc85rnqs1aq16c4ifp6b5o6vf2idl3oflao3ri18rdhjivkjgj8wa2zn3amt45h7sddd1eyvvw29vilbuldml1ke8zb4run5xvpx1kcqvakvkt3sa72etcqvhm5l52jo1zx9aytmb3xqqk02eh89agkn2d069ce7g6ep6zcg9ypfucd6hekf76gt65dvv2f87ur3ita3nce0g5kp6enw0l7evsuwh7h01dqvsl6cugb793twqcddbekbhseyr95u7ejeob1crdtqn4t6wta0y3c8o7ljvwd5xy0j3401bacx9jdlowwnvykzsbefearltldr8ijtt14w0fjkigaubatw8qsnmfre120spp5ff5c32cavpax8qrusnw176tuxayvlx2ahejls0gid54niuqvq9yzcisp21kbhie3csac8i93u1asp24cvmojkapbuwtfdkylug1a7nng99bsh90bvyw9emgdifjix1x0ie7v9upk5hdcuz9280cfm0trteamcy099ifx5jq8pqw7bwwztzd38f75e7ilpamagwyb13r7p7jltfs9s26a40nnfba3bklmfcuwrws4xea7g89zej5kjzfbhqr10muqd6nqr0b3461b884c6ooiy6jg6we9ef074vt3r60v1u23l7apum8
ug0ka6endysm74pc2wthetghnsk8shjj8porym1gchozth9gxksgbaksxta2w5uvtayrrhp16jfmdlgsogdmjptqzcw74j994mr3npsoictwu3y3h2t4bm9bk1idygr53oxz1vvuoobeoohiypdo55aqkotlwjfr6vlg5b11ygina4513s3cvcjpzdrgp44to3kob260x2cl9byne33q6pgy1f21y0c5k8sdqizdqegvslbrqyobkdahnhuk43zrfijgqeimplczn96jrf6ggueszkuvgjltd71zu24zucpz0oad2ness5iah5j87hc5w90iemhgvtagvtple421khmk9w3ovql04ymn7ob4q73scs5goopfmjc9x7chgtausf6nmmtdvrurqaxnedbag6mxgp8jongvqrrxeyt42zep206k20i1bjviccrrjyc5q1kwg9rp4jeniifddl6x61tprcepninjrnc0eb0tasot68ps5mjuea4euhxkkw2z9z4unfoxax7o459rv7wzmp544g0x3xbgn7xhbpha63zxycrlp042do7tbskdsrjseg9ca2xp4g9fw3klqyeqxhvbtu7dfhkqv16tcd9da7jeh7f4urzqkil7a0w97axtpuyw5alj8ugtwuthdr3qjz738a40h40uors9xl4o7lcevczcmx4l46ly6459a6kawtcf86jqruodvhy7n85l62mrwo0ew6i35wph4p9cersoqesif76l6dy89u32859kiu2fap9y60s0lc1ppevg82q1hsarwqobecdd61d1o9nf8ohh7bwwag7sca401g00apk0o0oo1ltgvm3fi3lfmrfgoihcywdmlckdwl4qu1ew3mcy3bs40uj5i3r1z1x5sfy7hjzhoy4zy42wuqj5pwoqq9pcea1p2m1pf0x4izhtmjcn4i9vg853eu18xojklsz3wsstqb68jgogopl5wuuqjk3im1jobo49w5w5h9zfsnnsus1la6ykpkoxw7ct81eh6brcx1h4spyr94fbmzivlgw5qtrp2x6bzbdbz08ngx79k4kmemxai4rvuj74faantbjs3slcpxzblchdyh0czb25rt2cf11ilyyfxpmyiwxgwpdf90ow7m7m3qpw4rihw2snkmgf6kpjz35kkmy3ncp8lwt55ojfoeuvw4qq7hflboh7f0pgb4y7l3co674at71kssrq90rz727i7sgoamucdizjta21oepl7gczdb5pbkcgffa8uho6ckcd392iqognwj0n67d3m79oonbco6gfbm63h1abuln4apl7h4j5hx2r1z2iljqcyej8qlufj37du8ymmjtulklz9xeiw32ftxz1lukrv0f86kr4fnawu7emo9b93xz55v2tw2t99hcnuahkropsu5nyd7nddjmjfhgexgklu6aqhvt8ulrina7s27hafphx6xy8ynzcmnrupj4hqfgi == \v\f\t\p\8\p\6\g\i\h\g\j\y\f\1\d\t\n\p\u\s\1\8\6\z\b\z\x\4\f\2\9\s\n\j\k\3\k\w\r\4\3\2\w\2\g\l\d\o\3\y\w\5\i\p\6\l\x\q\s\h\i\8\q\m\q\x\z\w\t\l\r\j\a\u\b\o\b\2\x\5\b\v\l\v\a\1\b\8\f\v\7\y\w\b\1\f\u\e\y\k\a\k\p\3\h\0\r\3\8\c\4\n\w\3\s\p\z\l\2\b\g\0\c\p\b\m\s\n\4\1\u\8\q\1\f\0\o\v\5\t\q\k\d\p\r\j\e\q\v\p\b\1\i\e\g\y\6\d\1\3\l\v\d\h\7\s\l\2\n\d\m\s\q\n\g\b\9\r\3\q\r\y\9\c\c\y\p\l\i\f\9\j\l\6\g\y\k\0\s\3\g\z\4\l\p\7\b\f\w\j\k\i\3\b\g\9\0\5\k\s\d\x\d\f\p\k\z\t\c\4\u\p\u\0\2\h\u\4\t\p\a\1\c\f\k\5\q\e\7\j\l\p\t\p\0\i\v\j\l\z\w\5\8\z\t\9\q\i\f\b\t\j\g\k\j\4\f\w\c\j\1\t\x\l\f\4\k\3\a\n\w\t\3\n\y\5\i\p\z\p\f\o\m\9\2\4\i\y\8\v\h\f\u\x\t\z\y\3\r\a\0\q\q\a\8\p\z\f\m\0\x\5\6\g\r\5\2\y\i\7\1\i\x\0\t\v\n\t\o\m\u\u\a\f\a\z\z\h\1\w\h\v\2\t\q\y\2\o\i\s\z\a\g\7\q\o\3\b\o\c\l\n\u\3\t\h\7\x\p\t\y\x\f\5\5\q\6\8\2\e\s\f\k\7\9\v\p\o\a\v\6\f\m\p\d\t\o\u\q\s\z\r\p\c\q\x\s\8\o\d\5\5\s\a\u\n\t\j\z\i\0\3\q\d\l\3\h\n\i\8\o\j\7\a\1\q\b\a\0\u\9\q\t\0\v\s\7\1\w\s\c\s\8\b\r\t\2\h\8\r\c\f\b\z\e\9\5\h\3\v\8\w\9\9\t\j\8\p\m\p\m\8\b\m\9\r\t\6\u\0\q\7\9\d\9\d\i\p\g\g\5\j\9\s\o\f\m\7\1\s\t\3\t\2\l\h\d\b\5\p\m\s\9\d\f\a\n\9\3\c\5\n\0\e\j\a\g\i\0\k\u\p\0\4\q\c\l\3\i\w\y\a\b\w\v\2\q\p\q\e\c\a\t\c\7\w\l\k\b\u\5\c\u\0\t\2\b\t\p\c\0\e\z\4\l\k\9\u\3\j\y\q\a\w\2\k\i\3\6\6\1\d\f\x\n\4\y\q\s\1\4\n\0\3\n\b\r\h\0\a\o\1\r\5\n\p\2\y\e\d\e\u\b\9\b\z\u\x\0\1\m\q\0\c\j\3\g\z\s\j\o\2\7\2\g\v\v\b\a\1\w\j\8\k\v\a\0\r\6\9\s\b\6\q\2\i\n\s\a\v\m\g\e\v\6\t\c\o\z\m\v\s\l\l\y\f\m\e\e\c\9\l\r\y\r\h\f\e\w\8\w\5\r\w\7\c\h\9\7\4\0\x\h\k\w\h\9\q\5\8\j\o\7\k\j\3\o\0\d\z\s\q\c\0\r\s\s\e\m\y\k\5\3\x\o\8\v\0\5\k\y\p\d\n\m\8\7\j\8\y\b\c\b\1\7\5\1\c\1\a\p\s\3\k\o\e\8\8\s\u\4\k\u\2\v\t\o\4\3\0\k\l\x\q\y\c\y\5\v\b\u\d\l\p\p\1\n\s\k\r\o\d\b\w\2\6\v\m\g\h\j\y\f\6\7\w\t\a\3\w\r\c\0\w\o\a\x\3\x\8\s\3\z\p\b\9\p\d\9\n\m\3\s\9\4\2\k\5\7\n\k\i\g\9\8\f\6\j\r\b\o\x\x\6\p\9\r\4\9\z\n\c\c\g\i\n\z\m\z\a\d\0\p\i\n\w\q\y\o\7\6\4\x\d\2\o\8\s\k\a\q\4\n\4\m\w\s\n\5\2\z\j\6\u\a\0\d\3\l\j\7\l\w\f\c\8\u\a\j\c\e\2\4\f\6\3\r\5\t\1\h\b\s\d\y\v\d\1\i\9\7\g\d\r\c\h\f\q\q\d\v\0\v\6\7\d\5\8\h\p\e\l\k\5\5\5\9\1\i\7\y\n\p\6\6\u\8\j\t\f\
z\x\o\k\2\g\7\g\9\c\e\m\j\9\e\k\3\c\n\2\f\b\t\w\2\n\e\6\t\w\b\g\p\a\7\m\e\j\0\3\6\k\i\1\a\7\n\8\t\t\y\j\j\d\k\d\e\6\j\z\c\c\g\0\j\f\5\r\p\8\i\f\d\9\w\4\z\q\d\x\v\o\q\c\f\q\5\o\a\g\s\2\j\b\6\m\z\t\7\0\2\2\0\5\7\9\5\6\x\c\u\d\l\w\m\f\p\d\1\f\9\z\r\f\l\a\4\h\5\o\y\w\r\x\3\t\4\j\2\g\k\g\n\4\x\c\8\n\3\z\e\r\b\f\6\s\6\1\9\e\7\m\5\0\r\0\y\b\z\v\t\y\a\h\4\k\w\r\v\0\c\1\n\i\3\e\x\n\m\l\4\2\z\q\s\d\m\9\3\9\5\c\j\w\r\n\h\a\d\1\m\a\e\o\c\9\4\d\w\2\2\2\t\l\c\4\f\r\z\h\2\n\2\u\b\a\7\7\l\7\x\k\g\g\i\z\e\m\6\e\1\6\4\6\h\9\d\1\6\j\d\x\9\y\p\f\d\r\m\p\h\h\2\g\l\9\z\x\m\x\y\w\u\a\n\8\q\x\j\4\i\i\a\n\h\z\s\7\g\x\5\q\u\o\g\l\c\x\z\3\e\d\m\a\6\e\k\z\9\y\w\g\w\7\n\b\9\e\q\3\7\o\w\p\c\r\9\s\i\i\s\1\d\x\f\9\j\a\s\2\3\i\6\m\c\8\h\v\z\s\k\a\c\i\t\1\e\y\7\b\o\2\f\i\n\5\6\v\2\p\u\q\k\4\7\w\x\1\o\1\j\e\d\y\i\l\g\s\e\h\5\3\0\u\8\r\x\z\w\5\w\o\a\9\0\1\w\d\7\r\n\l\p\y\2\l\g\p\y\y\g\f\q\u\j\r\m\6\h\d\g\n\i\c\2\g\p\y\6\6\4\e\2\l\6\v\i\t\r\3\o\9\p\t\h\p\6\l\y\f\j\n\t\g\1\o\c\2\f\c\f\y\r\4\2\3\f\t\f\7\9\1\a\w\i\r\9\f\i\l\t\a\q\l\h\k\q\v\w\i\z\0\f\2\y\f\i\f\l\q\0\9\l\x\w\0\4\r\a\7\c\1\e\j\7\k\p\6\7\0\b\9\h\4\g\w\i\p\j\v\4\e\v\9\h\l\y\j\h\9\k\8\z\j\6\b\r\g\r\w\k\g\y\h\b\b\u\k\m\3\7\n\1\0\p\0\k\s\q\b\i\k\1\r\g\9\m\6\3\t\q\f\b\4\w\7\u\3\6\9\4\y\q\y\x\h\p\n\q\p\k\k\8\v\y\d\r\f\z\t\u\2\n\8\l\s\2\4\m\1\f\6\c\0\i\7\3\o\8\d\g\a\z\m\w\o\w\z\4\u\8\h\j\6\g\t\e\l\4\l\9\f\l\v\3\q\e\n\1\p\1\b\h\2\q\y\j\l\2\q\c\d\c\s\o\j\h\7\4\2\l\h\h\m\t\4\4\9\a\u\i\6\4\4\m\e\u\m\c\x\f\f\i\7\y\w\f\j\q\t\o\q\4\5\s\c\6\r\x\n\i\e\q\0\1\x\a\o\r\s\h\b\p\5\o\g\l\c\8\s\e\2\v\u\s\w\6\w\j\3\w\l\c\4\d\3\t\o\n\h\1\7\f\j\p\3\t\n\u\i\i\c\t\k\6\i\j\m\c\5\9\9\l\j\o\1\c\a\h\o\d\t\t\t\i\x\h\q\5\7\q\e\i\u\p\i\2\v\1\s\w\t\g\o\3\o\n\l\5\6\6\u\j\c\1\w\r\c\9\f\t\f\7\3\o\k\r\o\i\f\6\t\t\o\4\7\7\m\n\c\z\3\l\k\3\o\q\t\i\a\r\m\g\g\x\9\b\y\x\0\1\v\b\f\d\i\c\l\i\q\p\y\0\c\f\j\l\4\0\w\6\b\c\8\5\r\n\q\s\1\a\q\1\6\c\4\i\f\p\6\b\5\o\6\v\f\2\i\d\l\3\o\f\l\a\o\3\r\i\1\8\r\d\h\j\i\v\k\j\g\j\8\w\a\2\z\n\3\a\m\t\4\5\h\7\s\d\d\d\1\e\y\v\v\w\2\9\v\i\l\b\u\l\d\m\l\1\k\e\8\z\b\4\r\u\n\5\x\v\p\x\1\k\c\q\v\a\k\v\k\t\3\s\a\7\2\e\t\c\q\v\h\m\5\l\5\2\j\o\1\z\x\9\a\y\t\m\b\3\x\q\q\k\0\2\e\h\8\9\a\g\k\n\2\d\0\6\9\c\e\7\g\6\e\p\6\z\c\g\9\y\p\f\u\c\d\6\h\e\k\f\7\6\g\t\6\5\d\v\v\2\f\8\7\u\r\3\i\t\a\3\n\c\e\0\g\5\k\p\6\e\n\w\0\l\7\e\v\s\u\w\h\7\h\0\1\d\q\v\s\l\6\c\u\g\b\7\9\3\t\w\q\c\d\d\b\e\k\b\h\s\e\y\r\9\5\u\7\e\j\e\o\b\1\c\r\d\t\q\n\4\t\6\w\t\a\0\y\3\c\8\o\7\l\j\v\w\d\5\x\y\0\j\3\4\0\1\b\a\c\x\9\j\d\l\o\w\w\n\v\y\k\z\s\b\e\f\e\a\r\l\t\l\d\r\8\i\j\t\t\1\4\w\0\f\j\k\i\g\a\u\b\a\t\w\8\q\s\n\m\f\r\e\1\2\0\s\p\p\5\f\f\5\c\3\2\c\a\v\p\a\x\8\q\r\u\s\n\w\1\7\6\t\u\x\a\y\v\l\x\2\a\h\e\j\l\s\0\g\i\d\5\4\n\i\u\q\v\q\9\y\z\c\i\s\p\2\1\k\b\h\i\e\3\c\s\a\c\8\i\9\3\u\1\a\s\p\2\4\c\v\m\o\j\k\a\p\b\u\w\t\f\d\k\y\l\u\g\1\a\7\n\n\g\9\9\b\s\h\9\0\b\v\y\w\9\e\m\g\d\i\f\j\i\x\1\x\0\i\e\7\v\9\u\p\k\5\h\d\c\u\z\9\2\8\0\c\f\m\0\t\r\t\e\a\m\c\y\0\9\9\i\f\x\5\j\q\8\p\q\w\7\b\w\w\z\t\z\d\3\8\f\7\5\e\7\i\l\p\a\m\a\g\w\y\b\1\3\r\7\p\7\j\l\t\f\s\9\s\2\6\a\4\0\n\n\f\b\a\3\b\k\l\m\f\c\u\w\r\w\s\4\x\e\a\7\g\8\9\z\e\j\5\k\j\z\f\b\h\q\r\1\0\m\u\q\d\6\n\q\r\0\b\3\4\6\1\b\8\8\4\c\6\o\o\i\y\6\j\g\6\w\e\9\e\f\0\7\4\v\t\3\r\6\0\v\1\u\2\3\l\7\a\p\u\m\8\u\g\0\k\a\6\e\n\d\y\s\m\7\4\p\c\2\w\t\h\e\t\g\h\n\s\k\8\s\h\j\j\8\p\o\r\y\m\1\g\c\h\o\z\t\h\9\g\x\k\s\g\b\a\k\s\x\t\a\2\w\5\u\v\t\a\y\r\r\h\p\1\6\j\f\m\d\l\g\s\o\g\d\m\j\p\t\q\z\c\w\7\4\j\9\9\4\m\r\3\n\p\s\o\i\c\t\w\u\3\y\3\h\2\t\4\b\m\9\b\k\1\i\d\y\g\r\5\3\o\x\z\1\v\v\u\o\o\b\e\o\o\h\i\y\p\d\o\5\5\a\q\k\o\t\l\w\j\f\r\6\v\l\g\5\b\1\1\y\g\i\n\a\4\5\1\3\s\3\c\v\c\j\p\z\d\r\g\p\4\4\t\o\3\k\o\b\2\6\0\x
\2\c\l\9\b\y\n\e\3\3\q\6\p\g\y\1\f\2\1\y\0\c\5\k\8\s\d\q\i\z\d\q\e\g\v\s\l\b\r\q\y\o\b\k\d\a\h\n\h\u\k\4\3\z\r\f\i\j\g\q\e\i\m\p\l\c\z\n\9\6\j\r\f\6\g\g\u\e\s\z\k\u\v\g\j\l\t\d\7\1\z\u\2\4\z\u\c\p\z\0\o\a\d\2\n\e\s\s\5\i\a\h\5\j\8\7\h\c\5\w\9\0\i\e\m\h\g\v\t\a\g\v\t\p\l\e\4\2\1\k\h\m\k\9\w\3\o\v\q\l\0\4\y\m\n\7\o\b\4\q\7\3\s\c\s\5\g\o\o\p\f\m\j\c\9\x\7\c\h\g\t\a\u\s\f\6\n\m\m\t\d\v\r\u\r\q\a\x\n\e\d\b\a\g\6\m\x\g\p\8\j\o\n\g\v\q\r\r\x\e\y\t\4\2\z\e\p\2\0\6\k\2\0\i\1\b\j\v\i\c\c\r\r\j\y\c\5\q\1\k\w\g\9\r\p\4\j\e\n\i\i\f\d\d\l\6\x\6\1\t\p\r\c\e\p\n\i\n\j\r\n\c\0\e\b\0\t\a\s\o\t\6\8\p\s\5\m\j\u\e\a\4\e\u\h\x\k\k\w\2\z\9\z\4\u\n\f\o\x\a\x\7\o\4\5\9\r\v\7\w\z\m\p\5\4\4\g\0\x\3\x\b\g\n\7\x\h\b\p\h\a\6\3\z\x\y\c\r\l\p\0\4\2\d\o\7\t\b\s\k\d\s\r\j\s\e\g\9\c\a\2\x\p\4\g\9\f\w\3\k\l\q\y\e\q\x\h\v\b\t\u\7\d\f\h\k\q\v\1\6\t\c\d\9\d\a\7\j\e\h\7\f\4\u\r\z\q\k\i\l\7\a\0\w\9\7\a\x\t\p\u\y\w\5\a\l\j\8\u\g\t\w\u\t\h\d\r\3\q\j\z\7\3\8\a\4\0\h\4\0\u\o\r\s\9\x\l\4\o\7\l\c\e\v\c\z\c\m\x\4\l\4\6\l\y\6\4\5\9\a\6\k\a\w\t\c\f\8\6\j\q\r\u\o\d\v\h\y\7\n\8\5\l\6\2\m\r\w\o\0\e\w\6\i\3\5\w\p\h\4\p\9\c\e\r\s\o\q\e\s\i\f\7\6\l\6\d\y\8\9\u\3\2\8\5\9\k\i\u\2\f\a\p\9\y\6\0\s\0\l\c\1\p\p\e\v\g\8\2\q\1\h\s\a\r\w\q\o\b\e\c\d\d\6\1\d\1\o\9\n\f\8\o\h\h\7\b\w\w\a\g\7\s\c\a\4\0\1\g\0\0\a\p\k\0\o\0\o\o\1\l\t\g\v\m\3\f\i\3\l\f\m\r\f\g\o\i\h\c\y\w\d\m\l\c\k\d\w\l\4\q\u\1\e\w\3\m\c\y\3\b\s\4\0\u\j\5\i\3\r\1\z\1\x\5\s\f\y\7\h\j\z\h\o\y\4\z\y\4\2\w\u\q\j\5\p\w\o\q\q\9\p\c\e\a\1\p\2\m\1\p\f\0\x\4\i\z\h\t\m\j\c\n\4\i\9\v\g\8\5\3\e\u\1\8\x\o\j\k\l\s\z\3\w\s\s\t\q\b\6\8\j\g\o\g\o\p\l\5\w\u\u\q\j\k\3\i\m\1\j\o\b\o\4\9\w\5\w\5\h\9\z\f\s\n\n\s\u\s\1\l\a\6\y\k\p\k\o\x\w\7\c\t\8\1\e\h\6\b\r\c\x\1\h\4\s\p\y\r\9\4\f\b\m\z\i\v\l\g\w\5\q\t\r\p\2\x\6\b\z\b\d\b\z\0\8\n\g\x\7\9\k\4\k\m\e\m\x\a\i\4\r\v\u\j\7\4\f\a\a\n\t\b\j\s\3\s\l\c\p\x\z\b\l\c\h\d\y\h\0\c\z\b\2\5\r\t\2\c\f\1\1\i\l\y\y\f\x\p\m\y\i\w\x\g\w\p\d\f\9\0\o\w\7\m\7\m\3\q\p\w\4\r\i\h\w\2\s\n\k\m\g\f\6\k\p\j\z\3\5\k\k\m\y\3\n\c\p\8\l\w\t\5\5\o\j\f\o\e\u\v\w\4\q\q\7\h\f\l\b\o\h\7\f\0\p\g\b\4\y\7\l\3\c\o\6\7\4\a\t\7\1\k\s\s\r\q\9\0\r\z\7\2\7\i\7\s\g\o\a\m\u\c\d\i\z\j\t\a\2\1\o\e\p\l\7\g\c\z\d\b\5\p\b\k\c\g\f\f\a\8\u\h\o\6\c\k\c\d\3\9\2\i\q\o\g\n\w\j\0\n\6\7\d\3\m\7\9\o\o\n\b\c\o\6\g\f\b\m\6\3\h\1\a\b\u\l\n\4\a\p\l\7\h\4\j\5\h\x\2\r\1\z\2\i\l\j\q\c\y\e\j\8\q\l\u\f\j\3\7\d\u\8\y\m\m\j\t\u\l\k\l\z\9\x\e\i\w\3\2\f\t\x\z\1\l\u\k\r\v\0\f\8\6\k\r\4\f\n\a\w\u\7\e\m\o\9\b\9\3\x\z\5\5\v\2\t\w\2\t\9\9\h\c\n\u\a\h\k\r\o\p\s\u\5\n\y\d\7\n\d\d\j\m\j\f\h\g\e\x\g\k\l\u\6\a\q\h\v\t\8\u\l\r\i\n\a\7\s\2\7\h\a\f\p\h\x\6\x\y\8\y\n\z\c\m\n\r\u\p\j\4\h\q\f\g\i ]] 00:07:19.674 00:07:19.674 real 0m1.038s 00:07:19.674 user 0m0.740s 00:07:19.674 sys 0m0.392s 00:07:19.674 00:23:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.674 00:23:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:19.674 ************************************ 00:07:19.674 END TEST dd_rw_offset 00:07:19.674 ************************************ 00:07:19.674 00:23:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:07:19.674 00:23:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:07:19.674 00:23:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:19.674 00:23:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:19.674 00:23:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:07:19.674 00:23:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
00:07:19.674 00:23:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:07:19.674 00:23:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:19.674 00:23:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:07:19.674 00:23:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:19.674 00:23:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:19.674 [2024-12-17 00:23:05.639876] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:19.674 [2024-12-17 00:23:05.639996] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72106 ] 00:07:19.674 { 00:07:19.674 "subsystems": [ 00:07:19.674 { 00:07:19.674 "subsystem": "bdev", 00:07:19.674 "config": [ 00:07:19.674 { 00:07:19.674 "params": { 00:07:19.674 "trtype": "pcie", 00:07:19.674 "traddr": "0000:00:10.0", 00:07:19.674 "name": "Nvme0" 00:07:19.674 }, 00:07:19.674 "method": "bdev_nvme_attach_controller" 00:07:19.674 }, 00:07:19.674 { 00:07:19.674 "method": "bdev_wait_for_examine" 00:07:19.674 } 00:07:19.674 ] 00:07:19.674 } 00:07:19.674 ] 00:07:19.674 } 00:07:19.934 [2024-12-17 00:23:05.777006] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.934 [2024-12-17 00:23:05.807827] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.934 [2024-12-17 00:23:05.836239] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:19.934  [2024-12-17T00:23:06.195Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:20.193 00:07:20.193 00:23:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:20.193 ************************************ 00:07:20.193 END TEST spdk_dd_basic_rw 00:07:20.193 ************************************ 00:07:20.193 00:07:20.193 real 0m14.084s 00:07:20.193 user 0m10.150s 00:07:20.193 sys 0m4.510s 00:07:20.193 00:23:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.193 00:23:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:20.193 00:23:06 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:20.193 00:23:06 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:20.193 00:23:06 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.193 00:23:06 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:20.193 ************************************ 00:07:20.193 START TEST spdk_dd_posix 00:07:20.193 ************************************ 00:07:20.193 00:23:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:20.452 * Looking for test storage... 
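The basic_rw suite finishes with the cleanup traced above: clear_nvme pushes a single 1048576-byte block of zeros from /dev/zero over the start of the bdev, and the dump files are removed. A minimal sketch of that cleanup, assuming the same binary and the same CONF JSON as in the earlier sketch:

# sketch only: equivalent of the clear_nvme step
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
"$SPDK_DD" --if=/dev/zero --bs=1048576 --count=1 --ob=Nvme0n1 --json <(printf '%s' "$CONF")   # CONF as defined in the earlier sketch
rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1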
00:07:20.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # lcov --version 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:20.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.452 --rc genhtml_branch_coverage=1 00:07:20.452 --rc genhtml_function_coverage=1 00:07:20.452 --rc genhtml_legend=1 00:07:20.452 --rc geninfo_all_blocks=1 00:07:20.452 --rc geninfo_unexecuted_blocks=1 00:07:20.452 00:07:20.452 ' 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:20.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.452 --rc genhtml_branch_coverage=1 00:07:20.452 --rc genhtml_function_coverage=1 00:07:20.452 --rc genhtml_legend=1 00:07:20.452 --rc geninfo_all_blocks=1 00:07:20.452 --rc geninfo_unexecuted_blocks=1 00:07:20.452 00:07:20.452 ' 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:20.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.452 --rc genhtml_branch_coverage=1 00:07:20.452 --rc genhtml_function_coverage=1 00:07:20.452 --rc genhtml_legend=1 00:07:20.452 --rc geninfo_all_blocks=1 00:07:20.452 --rc geninfo_unexecuted_blocks=1 00:07:20.452 00:07:20.452 ' 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:20.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.452 --rc genhtml_branch_coverage=1 00:07:20.452 --rc genhtml_function_coverage=1 00:07:20.452 --rc genhtml_legend=1 00:07:20.452 --rc geninfo_all_blocks=1 00:07:20.452 --rc geninfo_unexecuted_blocks=1 00:07:20.452 00:07:20.452 ' 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.452 00:23:06 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.453 00:23:06 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.453 00:23:06 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:07:20.453 00:23:06 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.453 00:23:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:07:20.453 00:23:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:07:20.453 00:23:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:07:20.453 00:23:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:07:20.453 00:23:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:20.453 00:23:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:20.453 00:23:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:07:20.453 00:23:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:07:20.453 * First test run, liburing in use 00:07:20.453 00:23:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:07:20.453 00:23:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:20.453 00:23:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:07:20.453 00:23:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:20.453 ************************************ 00:07:20.453 START TEST dd_flag_append 00:07:20.453 ************************************ 00:07:20.453 00:23:06 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1125 -- # append 00:07:20.453 00:23:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:07:20.453 00:23:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:07:20.453 00:23:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:07:20.453 00:23:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:20.453 00:23:06 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:20.453 00:23:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=g6aa31oshd1n9q6nqdt9zrxq51ep142v 00:07:20.453 00:23:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:07:20.453 00:23:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:20.453 00:23:06 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:20.453 00:23:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=uk99b4hntl936ddwq68dls21c0xaf414 00:07:20.453 00:23:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s g6aa31oshd1n9q6nqdt9zrxq51ep142v 00:07:20.453 00:23:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s uk99b4hntl936ddwq68dls21c0xaf414 00:07:20.453 00:23:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:20.453 [2024-12-17 00:23:06.379751] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
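The append test seeds the two dump files with 32-character strings from gen_bytes, copies dd.dump0 onto dd.dump1 with --oflag=append, and then checks that dd.dump1 holds the second string followed by the first. A minimal sketch of the same check, reusing the payloads and spdk_dd path from the log:

# sketch only: dd_flag_append reduced to its essentials
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
dump0=g6aa31oshd1n9q6nqdt9zrxq51ep142v    # 32-byte payloads copied from the log
dump1=uk99b4hntl936ddwq68dls21c0xaf414
printf %s "$dump0" > dd.dump0
printf %s "$dump1" > dd.dump1

# --oflag=append keeps dd.dump1's existing bytes and writes dump0's bytes after them
"$SPDK_DD" --if=dd.dump0 --of=dd.dump1 --oflag=append

# success means dd.dump1 is now the concatenation dump1 + dump0
[[ "$(cat dd.dump1)" == "${dump1}${dump0}" ]] && echo append-ok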
00:07:20.453 [2024-12-17 00:23:06.379991] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72173 ] 00:07:20.712 [2024-12-17 00:23:06.519652] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.712 [2024-12-17 00:23:06.560062] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.712 [2024-12-17 00:23:06.591952] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:20.712  [2024-12-17T00:23:06.975Z] Copying: 32/32 [B] (average 31 kBps) 00:07:20.972 00:07:20.972 00:23:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ uk99b4hntl936ddwq68dls21c0xaf414g6aa31oshd1n9q6nqdt9zrxq51ep142v == \u\k\9\9\b\4\h\n\t\l\9\3\6\d\d\w\q\6\8\d\l\s\2\1\c\0\x\a\f\4\1\4\g\6\a\a\3\1\o\s\h\d\1\n\9\q\6\n\q\d\t\9\z\r\x\q\5\1\e\p\1\4\2\v ]] 00:07:20.972 00:07:20.972 real 0m0.430s 00:07:20.972 user 0m0.222s 00:07:20.972 sys 0m0.182s 00:07:20.972 00:23:06 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.972 ************************************ 00:07:20.972 END TEST dd_flag_append 00:07:20.972 ************************************ 00:07:20.972 00:23:06 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:20.972 00:23:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:07:20.972 00:23:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:20.972 00:23:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.972 00:23:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:20.972 ************************************ 00:07:20.972 START TEST dd_flag_directory 00:07:20.972 ************************************ 00:07:20.972 00:23:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1125 -- # directory 00:07:20.972 00:23:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:20.972 00:23:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:07:20.972 00:23:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:20.972 00:23:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.972 00:23:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.972 00:23:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.972 00:23:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.972 00:23:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.972 00:23:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.972 00:23:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.972 00:23:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:20.972 00:23:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:20.972 [2024-12-17 00:23:06.860702] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:20.972 [2024-12-17 00:23:06.860968] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72201 ] 00:07:21.231 [2024-12-17 00:23:07.000062] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.231 [2024-12-17 00:23:07.040559] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.231 [2024-12-17 00:23:07.072758] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.231 [2024-12-17 00:23:07.089731] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:21.231 [2024-12-17 00:23:07.090063] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:21.231 [2024-12-17 00:23:07.090104] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:21.231 [2024-12-17 00:23:07.153674] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:21.231 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:07:21.231 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:21.231 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:07:21.231 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:07:21.231 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:07:21.231 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:21.231 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:21.231 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:07:21.231 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:21.231 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.231 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.232 00:23:07 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.232 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.232 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.232 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.232 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.232 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:21.232 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:21.491 [2024-12-17 00:23:07.269540] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:21.491 [2024-12-17 00:23:07.269773] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72211 ] 00:07:21.491 [2024-12-17 00:23:07.399702] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.491 [2024-12-17 00:23:07.433235] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.491 [2024-12-17 00:23:07.462656] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.491 [2024-12-17 00:23:07.477491] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:21.491 [2024-12-17 00:23:07.477536] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:21.491 [2024-12-17 00:23:07.477563] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:21.750 [2024-12-17 00:23:07.534940] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:21.750 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:07:21.750 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:21.750 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:07:21.750 ************************************ 00:07:21.750 END TEST dd_flag_directory 00:07:21.750 ************************************ 00:07:21.750 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:07:21.750 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:07:21.750 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:21.750 00:07:21.750 real 0m0.799s 00:07:21.750 user 0m0.372s 00:07:21.750 sys 0m0.218s 00:07:21.750 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.750 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:07:21.750 00:23:07 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:07:21.750 00:23:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:21.750 00:23:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.750 00:23:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:21.750 ************************************ 00:07:21.750 START TEST dd_flag_nofollow 00:07:21.750 ************************************ 00:07:21.750 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1125 -- # nofollow 00:07:21.750 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:21.750 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:21.750 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:21.750 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:21.750 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:21.750 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:07:21.750 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:21.750 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.750 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.750 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.750 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.750 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.750 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.750 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.751 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:21.751 00:23:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:21.751 [2024-12-17 00:23:07.706620] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
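The nofollow test links both dump files with ln -fs and expects spdk_dd to refuse the links whenever --iflag=nofollow or --oflag=nofollow is set; the "Too many levels of symbolic links" errors below are the intended outcome, and a later copy through the same link without the flag must still succeed. A small illustration of the same semantics using GNU dd's nofollow flag as a stand-in for spdk_dd:

# sketch only: nofollow refuses a symlink, a plain open follows it
printf %s payload > dd.dump0
ln -fs dd.dump0 dd.dump0.link

dd if=dd.dump0.link iflag=nofollow of=dd.dump1 2>/dev/null && echo unexpected-success   # should fail (ELOOP)

dd if=dd.dump0.link of=dd.dump1 2>/dev/null && cmp -s dd.dump0 dd.dump1 && echo copy-ok  # link followed, data intact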
00:07:21.751 [2024-12-17 00:23:07.706704] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72238 ] 00:07:22.010 [2024-12-17 00:23:07.833725] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.010 [2024-12-17 00:23:07.865763] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.010 [2024-12-17 00:23:07.891536] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:22.010 [2024-12-17 00:23:07.905530] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:22.010 [2024-12-17 00:23:07.905582] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:22.010 [2024-12-17 00:23:07.905611] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:22.010 [2024-12-17 00:23:07.960837] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:22.270 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:07:22.270 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:22.270 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:07:22.270 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:07:22.270 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:07:22.270 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:22.270 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:22.270 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:07:22.270 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:22.270 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.270 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:22.270 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.271 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:22.271 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.271 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:22.271 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.271 00:23:08 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:22.271 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:22.271 [2024-12-17 00:23:08.087999] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:22.271 [2024-12-17 00:23:08.088301] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72249 ] 00:07:22.271 [2024-12-17 00:23:08.224696] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.271 [2024-12-17 00:23:08.259220] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.533 [2024-12-17 00:23:08.289160] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:22.533 [2024-12-17 00:23:08.304099] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:22.533 [2024-12-17 00:23:08.304167] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:22.533 [2024-12-17 00:23:08.304196] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:22.533 [2024-12-17 00:23:08.360612] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:22.533 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:07:22.533 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:22.533 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:07:22.533 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:07:22.533 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:07:22.533 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:22.533 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:07:22.533 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:07:22.533 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:22.533 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:22.533 [2024-12-17 00:23:08.488431] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:22.533 [2024-12-17 00:23:08.488527] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72251 ] 00:07:22.793 [2024-12-17 00:23:08.624178] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.793 [2024-12-17 00:23:08.654779] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.793 [2024-12-17 00:23:08.681767] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:22.793  [2024-12-17T00:23:09.055Z] Copying: 512/512 [B] (average 500 kBps) 00:07:23.052 00:07:23.052 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ dvms1joyfcurdm3c9uq77och2pbmr9libm1zvn9jblygn38lxscuosh9v9rc767z00j5evpz3lz9w85w74xb4ye5oa95fl4lgds7ryzhkbxj2livrv6shfqt20buvkfvuv6nezrwfxxei7umf9l1cc9l70hpolz1vj7c9kk1lg7zu7ef6jxp073xfzpjs7efa5c6pdc9xzrb9m2mmkgafjp7m0ds30kemqxwyrxddlblysqsb7xtu8qri0evx6w8qlrmqn9c6ilahwvhblhrl90gez16qfjrx27kp7gupx2hely2fcssamyavez58d6esy9w8qkdrxe9ewcjtuzyomk2xn6svkewrz2zynx4r4z9g4444grbt7p3ya2nsn7yyqe811rh305eurcf0dbicrc0n03mc7xrvi5v8xffe0dvbvuksfbaqm2ahzba5dbbv5a8hrnh58r44ygmrwouz5dgwsaoockeutmaxd9v9lzp0z71jzti90dnca2bammu == \d\v\m\s\1\j\o\y\f\c\u\r\d\m\3\c\9\u\q\7\7\o\c\h\2\p\b\m\r\9\l\i\b\m\1\z\v\n\9\j\b\l\y\g\n\3\8\l\x\s\c\u\o\s\h\9\v\9\r\c\7\6\7\z\0\0\j\5\e\v\p\z\3\l\z\9\w\8\5\w\7\4\x\b\4\y\e\5\o\a\9\5\f\l\4\l\g\d\s\7\r\y\z\h\k\b\x\j\2\l\i\v\r\v\6\s\h\f\q\t\2\0\b\u\v\k\f\v\u\v\6\n\e\z\r\w\f\x\x\e\i\7\u\m\f\9\l\1\c\c\9\l\7\0\h\p\o\l\z\1\v\j\7\c\9\k\k\1\l\g\7\z\u\7\e\f\6\j\x\p\0\7\3\x\f\z\p\j\s\7\e\f\a\5\c\6\p\d\c\9\x\z\r\b\9\m\2\m\m\k\g\a\f\j\p\7\m\0\d\s\3\0\k\e\m\q\x\w\y\r\x\d\d\l\b\l\y\s\q\s\b\7\x\t\u\8\q\r\i\0\e\v\x\6\w\8\q\l\r\m\q\n\9\c\6\i\l\a\h\w\v\h\b\l\h\r\l\9\0\g\e\z\1\6\q\f\j\r\x\2\7\k\p\7\g\u\p\x\2\h\e\l\y\2\f\c\s\s\a\m\y\a\v\e\z\5\8\d\6\e\s\y\9\w\8\q\k\d\r\x\e\9\e\w\c\j\t\u\z\y\o\m\k\2\x\n\6\s\v\k\e\w\r\z\2\z\y\n\x\4\r\4\z\9\g\4\4\4\4\g\r\b\t\7\p\3\y\a\2\n\s\n\7\y\y\q\e\8\1\1\r\h\3\0\5\e\u\r\c\f\0\d\b\i\c\r\c\0\n\0\3\m\c\7\x\r\v\i\5\v\8\x\f\f\e\0\d\v\b\v\u\k\s\f\b\a\q\m\2\a\h\z\b\a\5\d\b\b\v\5\a\8\h\r\n\h\5\8\r\4\4\y\g\m\r\w\o\u\z\5\d\g\w\s\a\o\o\c\k\e\u\t\m\a\x\d\9\v\9\l\z\p\0\z\7\1\j\z\t\i\9\0\d\n\c\a\2\b\a\m\m\u ]] 00:07:23.052 00:07:23.052 real 0m1.165s 00:07:23.052 user 0m0.562s 00:07:23.052 sys 0m0.355s 00:07:23.052 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.052 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:23.052 ************************************ 00:07:23.052 END TEST dd_flag_nofollow 00:07:23.052 ************************************ 00:07:23.052 00:23:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:07:23.052 00:23:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:23.052 00:23:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.052 00:23:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:23.052 ************************************ 00:07:23.052 START TEST dd_flag_noatime 00:07:23.052 ************************************ 00:07:23.052 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1125 -- # noatime 00:07:23.052 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:07:23.052 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:07:23.052 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:07:23.052 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:07:23.052 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:23.052 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:23.052 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1734394988 00:07:23.052 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:23.052 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1734394988 00:07:23.052 00:23:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:07:23.989 00:23:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:23.989 [2024-12-17 00:23:09.949301] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:23.989 [2024-12-17 00:23:09.949432] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72293 ] 00:07:24.248 [2024-12-17 00:23:10.086628] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.248 [2024-12-17 00:23:10.128767] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.248 [2024-12-17 00:23:10.163636] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:24.248  [2024-12-17T00:23:10.510Z] Copying: 512/512 [B] (average 500 kBps) 00:07:24.507 00:07:24.507 00:23:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:24.507 00:23:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1734394988 )) 00:07:24.507 00:23:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:24.507 00:23:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1734394988 )) 00:07:24.507 00:23:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:24.507 [2024-12-17 00:23:10.375185] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
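The noatime test records both files' access times with stat --printf=%X, sleeps one second, copies with --iflag=noatime and checks the source atime did not move, then repeats the copy without the flag and expects the atime to advance past the recorded value. A small illustration with GNU dd's noatime flag standing in for spdk_dd (noatime may need file ownership or CAP_FOWNER, and the final check also depends on the filesystem's atime mount options):

# sketch only: the two-phase atime check
atime_before=$(stat --printf=%X dd.dump0)
sleep 1

dd if=dd.dump0 iflag=noatime of=dd.dump1 2>/dev/null
(( $(stat --printf=%X dd.dump0) == atime_before )) && echo atime-preserved   # noatime read leaves atime alone

dd if=dd.dump0 of=dd.dump1 2>/dev/null
(( $(stat --printf=%X dd.dump0) > atime_before )) && echo atime-advanced     # plain read is expected to bump it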
00:07:24.507 [2024-12-17 00:23:10.375471] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72307 ] 00:07:24.767 [2024-12-17 00:23:10.511893] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.767 [2024-12-17 00:23:10.551651] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.767 [2024-12-17 00:23:10.581057] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:24.767  [2024-12-17T00:23:10.770Z] Copying: 512/512 [B] (average 500 kBps) 00:07:24.767 00:07:24.767 00:23:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:24.767 ************************************ 00:07:24.767 END TEST dd_flag_noatime 00:07:24.767 ************************************ 00:07:24.767 00:23:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1734394990 )) 00:07:24.767 00:07:24.767 real 0m1.854s 00:07:24.767 user 0m0.414s 00:07:24.767 sys 0m0.379s 00:07:24.767 00:23:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:24.767 00:23:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:25.026 00:23:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:25.026 00:23:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:25.026 00:23:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.026 00:23:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:25.026 ************************************ 00:07:25.026 START TEST dd_flags_misc 00:07:25.026 ************************************ 00:07:25.026 00:23:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1125 -- # io 00:07:25.026 00:23:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:25.026 00:23:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:25.026 00:23:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:25.026 00:23:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:25.026 00:23:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:25.026 00:23:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:25.026 00:23:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:25.026 00:23:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:25.026 00:23:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:25.026 [2024-12-17 00:23:10.848524] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
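dd_flags_misc walks a small matrix of open flags: each read-side flag in (direct, nonblock) is paired with every write-side flag in (direct, nonblock, sync, dsync), and each pairing copies a fresh 512-byte payload and verifies it round-trips. The loop structure below is reconstructed as a sketch from the traced posix.sh lines; head -c and cmp stand in for the test's gen_bytes helper and bash string comparison:

# sketch only: the flags_misc iteration
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)

for flag_ro in "${flags_ro[@]}"; do
  head -c 512 /dev/urandom > dd.dump0          # fresh payload per read-side flag
  for flag_rw in "${flags_rw[@]}"; do
    "$SPDK_DD" --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
    cmp -s dd.dump0 dd.dump1 || echo "mismatch for $flag_ro/$flag_rw"
  done
done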
00:07:25.026 [2024-12-17 00:23:10.848680] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72334 ] 00:07:25.026 [2024-12-17 00:23:10.985538] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.026 [2024-12-17 00:23:11.017522] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.286 [2024-12-17 00:23:11.046381] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:25.286  [2024-12-17T00:23:11.289Z] Copying: 512/512 [B] (average 500 kBps) 00:07:25.286 00:07:25.286 00:23:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ djhsmxs9knh0aleohlr6i63mh9ekid10ubpbb10u78ypffd8x37adsn8c8x7854qnhvo4k4kx5h1q91xjvdjle5wb0v8ma7t9enzz0smaqepp9npkl4vf2liifftxzdczdm8vhqnogd2gy4xpzt9mey6ijg06rwvaqwj3gk7npz147rt75pivsq5t18c1apbmvnc31ort0qpb6r0dzaaqzn8qmdt3nolujr4re7nocxzmwmmj8gl2k5qet3ms48v8o2kjmsfcfxypca5m4uwqn8vz8vst4l4hiy3gahgw2ptv9gbeuu1mw3fnnpmimqmnflp3pc47ro03h7tb3xaxnei4w6f2zgpl339q9k74sed1w2ute1konkwjpzq4aokuyvuua8j053kemyosl9r9v5209bckd2gn037rv7krzk4drqwipuz0uly61ve1wt30gh6a572fzf3a5byvjmd03cwpw6aedjqeajfzemn9xo6mshk7291n63ukzf8kjwz == \d\j\h\s\m\x\s\9\k\n\h\0\a\l\e\o\h\l\r\6\i\6\3\m\h\9\e\k\i\d\1\0\u\b\p\b\b\1\0\u\7\8\y\p\f\f\d\8\x\3\7\a\d\s\n\8\c\8\x\7\8\5\4\q\n\h\v\o\4\k\4\k\x\5\h\1\q\9\1\x\j\v\d\j\l\e\5\w\b\0\v\8\m\a\7\t\9\e\n\z\z\0\s\m\a\q\e\p\p\9\n\p\k\l\4\v\f\2\l\i\i\f\f\t\x\z\d\c\z\d\m\8\v\h\q\n\o\g\d\2\g\y\4\x\p\z\t\9\m\e\y\6\i\j\g\0\6\r\w\v\a\q\w\j\3\g\k\7\n\p\z\1\4\7\r\t\7\5\p\i\v\s\q\5\t\1\8\c\1\a\p\b\m\v\n\c\3\1\o\r\t\0\q\p\b\6\r\0\d\z\a\a\q\z\n\8\q\m\d\t\3\n\o\l\u\j\r\4\r\e\7\n\o\c\x\z\m\w\m\m\j\8\g\l\2\k\5\q\e\t\3\m\s\4\8\v\8\o\2\k\j\m\s\f\c\f\x\y\p\c\a\5\m\4\u\w\q\n\8\v\z\8\v\s\t\4\l\4\h\i\y\3\g\a\h\g\w\2\p\t\v\9\g\b\e\u\u\1\m\w\3\f\n\n\p\m\i\m\q\m\n\f\l\p\3\p\c\4\7\r\o\0\3\h\7\t\b\3\x\a\x\n\e\i\4\w\6\f\2\z\g\p\l\3\3\9\q\9\k\7\4\s\e\d\1\w\2\u\t\e\1\k\o\n\k\w\j\p\z\q\4\a\o\k\u\y\v\u\u\a\8\j\0\5\3\k\e\m\y\o\s\l\9\r\9\v\5\2\0\9\b\c\k\d\2\g\n\0\3\7\r\v\7\k\r\z\k\4\d\r\q\w\i\p\u\z\0\u\l\y\6\1\v\e\1\w\t\3\0\g\h\6\a\5\7\2\f\z\f\3\a\5\b\y\v\j\m\d\0\3\c\w\p\w\6\a\e\d\j\q\e\a\j\f\z\e\m\n\9\x\o\6\m\s\h\k\7\2\9\1\n\6\3\u\k\z\f\8\k\j\w\z ]] 00:07:25.286 00:23:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:25.286 00:23:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:25.286 [2024-12-17 00:23:11.239596] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
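The long bracketed expression above is bash xtrace output of the posix.sh@93 verification: the right-hand side is the same random string with every character backslash-escaped, which is simply how xtrace prints the pattern operand of [[ ... == ... ]]. A hypothetical reconstruction of that check, assuming the generated bytes are read back from both dump files, would be:

  # hypothetical shape of the posix.sh@93 check seen in the trace
  [[ "$(< dd.dump1)" == "$(< dd.dump0)" ]]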
00:07:25.286 [2024-12-17 00:23:11.239716] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72345 ] 00:07:25.545 [2024-12-17 00:23:11.376768] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.545 [2024-12-17 00:23:11.412819] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.545 [2024-12-17 00:23:11.439049] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:25.545  [2024-12-17T00:23:11.807Z] Copying: 512/512 [B] (average 500 kBps) 00:07:25.804 00:07:25.804 00:23:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ djhsmxs9knh0aleohlr6i63mh9ekid10ubpbb10u78ypffd8x37adsn8c8x7854qnhvo4k4kx5h1q91xjvdjle5wb0v8ma7t9enzz0smaqepp9npkl4vf2liifftxzdczdm8vhqnogd2gy4xpzt9mey6ijg06rwvaqwj3gk7npz147rt75pivsq5t18c1apbmvnc31ort0qpb6r0dzaaqzn8qmdt3nolujr4re7nocxzmwmmj8gl2k5qet3ms48v8o2kjmsfcfxypca5m4uwqn8vz8vst4l4hiy3gahgw2ptv9gbeuu1mw3fnnpmimqmnflp3pc47ro03h7tb3xaxnei4w6f2zgpl339q9k74sed1w2ute1konkwjpzq4aokuyvuua8j053kemyosl9r9v5209bckd2gn037rv7krzk4drqwipuz0uly61ve1wt30gh6a572fzf3a5byvjmd03cwpw6aedjqeajfzemn9xo6mshk7291n63ukzf8kjwz == \d\j\h\s\m\x\s\9\k\n\h\0\a\l\e\o\h\l\r\6\i\6\3\m\h\9\e\k\i\d\1\0\u\b\p\b\b\1\0\u\7\8\y\p\f\f\d\8\x\3\7\a\d\s\n\8\c\8\x\7\8\5\4\q\n\h\v\o\4\k\4\k\x\5\h\1\q\9\1\x\j\v\d\j\l\e\5\w\b\0\v\8\m\a\7\t\9\e\n\z\z\0\s\m\a\q\e\p\p\9\n\p\k\l\4\v\f\2\l\i\i\f\f\t\x\z\d\c\z\d\m\8\v\h\q\n\o\g\d\2\g\y\4\x\p\z\t\9\m\e\y\6\i\j\g\0\6\r\w\v\a\q\w\j\3\g\k\7\n\p\z\1\4\7\r\t\7\5\p\i\v\s\q\5\t\1\8\c\1\a\p\b\m\v\n\c\3\1\o\r\t\0\q\p\b\6\r\0\d\z\a\a\q\z\n\8\q\m\d\t\3\n\o\l\u\j\r\4\r\e\7\n\o\c\x\z\m\w\m\m\j\8\g\l\2\k\5\q\e\t\3\m\s\4\8\v\8\o\2\k\j\m\s\f\c\f\x\y\p\c\a\5\m\4\u\w\q\n\8\v\z\8\v\s\t\4\l\4\h\i\y\3\g\a\h\g\w\2\p\t\v\9\g\b\e\u\u\1\m\w\3\f\n\n\p\m\i\m\q\m\n\f\l\p\3\p\c\4\7\r\o\0\3\h\7\t\b\3\x\a\x\n\e\i\4\w\6\f\2\z\g\p\l\3\3\9\q\9\k\7\4\s\e\d\1\w\2\u\t\e\1\k\o\n\k\w\j\p\z\q\4\a\o\k\u\y\v\u\u\a\8\j\0\5\3\k\e\m\y\o\s\l\9\r\9\v\5\2\0\9\b\c\k\d\2\g\n\0\3\7\r\v\7\k\r\z\k\4\d\r\q\w\i\p\u\z\0\u\l\y\6\1\v\e\1\w\t\3\0\g\h\6\a\5\7\2\f\z\f\3\a\5\b\y\v\j\m\d\0\3\c\w\p\w\6\a\e\d\j\q\e\a\j\f\z\e\m\n\9\x\o\6\m\s\h\k\7\2\9\1\n\6\3\u\k\z\f\8\k\j\w\z ]] 00:07:25.804 00:23:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:25.804 00:23:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:25.804 [2024-12-17 00:23:11.642791] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:25.804 [2024-12-17 00:23:11.642944] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72349 ] 00:07:25.804 [2024-12-17 00:23:11.778209] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.064 [2024-12-17 00:23:11.813634] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.064 [2024-12-17 00:23:11.845615] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:26.064  [2024-12-17T00:23:12.067Z] Copying: 512/512 [B] (average 125 kBps) 00:07:26.064 00:07:26.064 00:23:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ djhsmxs9knh0aleohlr6i63mh9ekid10ubpbb10u78ypffd8x37adsn8c8x7854qnhvo4k4kx5h1q91xjvdjle5wb0v8ma7t9enzz0smaqepp9npkl4vf2liifftxzdczdm8vhqnogd2gy4xpzt9mey6ijg06rwvaqwj3gk7npz147rt75pivsq5t18c1apbmvnc31ort0qpb6r0dzaaqzn8qmdt3nolujr4re7nocxzmwmmj8gl2k5qet3ms48v8o2kjmsfcfxypca5m4uwqn8vz8vst4l4hiy3gahgw2ptv9gbeuu1mw3fnnpmimqmnflp3pc47ro03h7tb3xaxnei4w6f2zgpl339q9k74sed1w2ute1konkwjpzq4aokuyvuua8j053kemyosl9r9v5209bckd2gn037rv7krzk4drqwipuz0uly61ve1wt30gh6a572fzf3a5byvjmd03cwpw6aedjqeajfzemn9xo6mshk7291n63ukzf8kjwz == \d\j\h\s\m\x\s\9\k\n\h\0\a\l\e\o\h\l\r\6\i\6\3\m\h\9\e\k\i\d\1\0\u\b\p\b\b\1\0\u\7\8\y\p\f\f\d\8\x\3\7\a\d\s\n\8\c\8\x\7\8\5\4\q\n\h\v\o\4\k\4\k\x\5\h\1\q\9\1\x\j\v\d\j\l\e\5\w\b\0\v\8\m\a\7\t\9\e\n\z\z\0\s\m\a\q\e\p\p\9\n\p\k\l\4\v\f\2\l\i\i\f\f\t\x\z\d\c\z\d\m\8\v\h\q\n\o\g\d\2\g\y\4\x\p\z\t\9\m\e\y\6\i\j\g\0\6\r\w\v\a\q\w\j\3\g\k\7\n\p\z\1\4\7\r\t\7\5\p\i\v\s\q\5\t\1\8\c\1\a\p\b\m\v\n\c\3\1\o\r\t\0\q\p\b\6\r\0\d\z\a\a\q\z\n\8\q\m\d\t\3\n\o\l\u\j\r\4\r\e\7\n\o\c\x\z\m\w\m\m\j\8\g\l\2\k\5\q\e\t\3\m\s\4\8\v\8\o\2\k\j\m\s\f\c\f\x\y\p\c\a\5\m\4\u\w\q\n\8\v\z\8\v\s\t\4\l\4\h\i\y\3\g\a\h\g\w\2\p\t\v\9\g\b\e\u\u\1\m\w\3\f\n\n\p\m\i\m\q\m\n\f\l\p\3\p\c\4\7\r\o\0\3\h\7\t\b\3\x\a\x\n\e\i\4\w\6\f\2\z\g\p\l\3\3\9\q\9\k\7\4\s\e\d\1\w\2\u\t\e\1\k\o\n\k\w\j\p\z\q\4\a\o\k\u\y\v\u\u\a\8\j\0\5\3\k\e\m\y\o\s\l\9\r\9\v\5\2\0\9\b\c\k\d\2\g\n\0\3\7\r\v\7\k\r\z\k\4\d\r\q\w\i\p\u\z\0\u\l\y\6\1\v\e\1\w\t\3\0\g\h\6\a\5\7\2\f\z\f\3\a\5\b\y\v\j\m\d\0\3\c\w\p\w\6\a\e\d\j\q\e\a\j\f\z\e\m\n\9\x\o\6\m\s\h\k\7\2\9\1\n\6\3\u\k\z\f\8\k\j\w\z ]] 00:07:26.064 00:23:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:26.064 00:23:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:26.064 [2024-12-17 00:23:12.053356] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:26.064 [2024-12-17 00:23:12.053469] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72358 ] 00:07:26.324 [2024-12-17 00:23:12.192655] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.324 [2024-12-17 00:23:12.226411] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.324 [2024-12-17 00:23:12.254052] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:26.324  [2024-12-17T00:23:12.586Z] Copying: 512/512 [B] (average 250 kBps) 00:07:26.583 00:07:26.584 00:23:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ djhsmxs9knh0aleohlr6i63mh9ekid10ubpbb10u78ypffd8x37adsn8c8x7854qnhvo4k4kx5h1q91xjvdjle5wb0v8ma7t9enzz0smaqepp9npkl4vf2liifftxzdczdm8vhqnogd2gy4xpzt9mey6ijg06rwvaqwj3gk7npz147rt75pivsq5t18c1apbmvnc31ort0qpb6r0dzaaqzn8qmdt3nolujr4re7nocxzmwmmj8gl2k5qet3ms48v8o2kjmsfcfxypca5m4uwqn8vz8vst4l4hiy3gahgw2ptv9gbeuu1mw3fnnpmimqmnflp3pc47ro03h7tb3xaxnei4w6f2zgpl339q9k74sed1w2ute1konkwjpzq4aokuyvuua8j053kemyosl9r9v5209bckd2gn037rv7krzk4drqwipuz0uly61ve1wt30gh6a572fzf3a5byvjmd03cwpw6aedjqeajfzemn9xo6mshk7291n63ukzf8kjwz == \d\j\h\s\m\x\s\9\k\n\h\0\a\l\e\o\h\l\r\6\i\6\3\m\h\9\e\k\i\d\1\0\u\b\p\b\b\1\0\u\7\8\y\p\f\f\d\8\x\3\7\a\d\s\n\8\c\8\x\7\8\5\4\q\n\h\v\o\4\k\4\k\x\5\h\1\q\9\1\x\j\v\d\j\l\e\5\w\b\0\v\8\m\a\7\t\9\e\n\z\z\0\s\m\a\q\e\p\p\9\n\p\k\l\4\v\f\2\l\i\i\f\f\t\x\z\d\c\z\d\m\8\v\h\q\n\o\g\d\2\g\y\4\x\p\z\t\9\m\e\y\6\i\j\g\0\6\r\w\v\a\q\w\j\3\g\k\7\n\p\z\1\4\7\r\t\7\5\p\i\v\s\q\5\t\1\8\c\1\a\p\b\m\v\n\c\3\1\o\r\t\0\q\p\b\6\r\0\d\z\a\a\q\z\n\8\q\m\d\t\3\n\o\l\u\j\r\4\r\e\7\n\o\c\x\z\m\w\m\m\j\8\g\l\2\k\5\q\e\t\3\m\s\4\8\v\8\o\2\k\j\m\s\f\c\f\x\y\p\c\a\5\m\4\u\w\q\n\8\v\z\8\v\s\t\4\l\4\h\i\y\3\g\a\h\g\w\2\p\t\v\9\g\b\e\u\u\1\m\w\3\f\n\n\p\m\i\m\q\m\n\f\l\p\3\p\c\4\7\r\o\0\3\h\7\t\b\3\x\a\x\n\e\i\4\w\6\f\2\z\g\p\l\3\3\9\q\9\k\7\4\s\e\d\1\w\2\u\t\e\1\k\o\n\k\w\j\p\z\q\4\a\o\k\u\y\v\u\u\a\8\j\0\5\3\k\e\m\y\o\s\l\9\r\9\v\5\2\0\9\b\c\k\d\2\g\n\0\3\7\r\v\7\k\r\z\k\4\d\r\q\w\i\p\u\z\0\u\l\y\6\1\v\e\1\w\t\3\0\g\h\6\a\5\7\2\f\z\f\3\a\5\b\y\v\j\m\d\0\3\c\w\p\w\6\a\e\d\j\q\e\a\j\f\z\e\m\n\9\x\o\6\m\s\h\k\7\2\9\1\n\6\3\u\k\z\f\8\k\j\w\z ]] 00:07:26.584 00:23:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:26.584 00:23:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:26.584 00:23:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:26.584 00:23:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:26.584 00:23:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:26.584 00:23:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:26.584 [2024-12-17 00:23:12.474851] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:26.584 [2024-12-17 00:23:12.475237] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72368 ] 00:07:26.843 [2024-12-17 00:23:12.611438] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.843 [2024-12-17 00:23:12.645410] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.843 [2024-12-17 00:23:12.672768] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:26.843  [2024-12-17T00:23:12.846Z] Copying: 512/512 [B] (average 500 kBps) 00:07:26.843 00:07:26.843 00:23:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ sxbqyd7hpyzpc844vq6d312kigfc876eroisfkvi22w1ea5ub02ngufro1u80ked29i1qwwsncw2zzovm2rv2kfrf9a476cwclqplixvu33sw10xp2ywmelwlyh251nvgzge1lfpggb875ml5pgoa1s86sts4bmnie435n6xg5jp7ps2ca1dcbsdtn95rrht4ksc2oymsp8uitrdrxoz0rh373lhv2x61m0lfnqzdofu4rimuvwpzkfufv78d36r2183xakrnscwwg5g125i8elfluxpqnnwfv782rnc9obh37qbozc8djt20a2bcga041ophdrmr7gtbvj6n8gtkog44s0mwtyzdme4zruzphx2oc40fbzkz2r2nqokobdy0znfo0ld5ux5hg3dp117szotkq4uvqih9uoh3gt202o5e7ctxvihkkfy3zow5retzm64irbfxa6yqsgoedveffzahpmi30tr8lsw0d621nnlbrhhxtdi2unarwdwi7zb == \s\x\b\q\y\d\7\h\p\y\z\p\c\8\4\4\v\q\6\d\3\1\2\k\i\g\f\c\8\7\6\e\r\o\i\s\f\k\v\i\2\2\w\1\e\a\5\u\b\0\2\n\g\u\f\r\o\1\u\8\0\k\e\d\2\9\i\1\q\w\w\s\n\c\w\2\z\z\o\v\m\2\r\v\2\k\f\r\f\9\a\4\7\6\c\w\c\l\q\p\l\i\x\v\u\3\3\s\w\1\0\x\p\2\y\w\m\e\l\w\l\y\h\2\5\1\n\v\g\z\g\e\1\l\f\p\g\g\b\8\7\5\m\l\5\p\g\o\a\1\s\8\6\s\t\s\4\b\m\n\i\e\4\3\5\n\6\x\g\5\j\p\7\p\s\2\c\a\1\d\c\b\s\d\t\n\9\5\r\r\h\t\4\k\s\c\2\o\y\m\s\p\8\u\i\t\r\d\r\x\o\z\0\r\h\3\7\3\l\h\v\2\x\6\1\m\0\l\f\n\q\z\d\o\f\u\4\r\i\m\u\v\w\p\z\k\f\u\f\v\7\8\d\3\6\r\2\1\8\3\x\a\k\r\n\s\c\w\w\g\5\g\1\2\5\i\8\e\l\f\l\u\x\p\q\n\n\w\f\v\7\8\2\r\n\c\9\o\b\h\3\7\q\b\o\z\c\8\d\j\t\2\0\a\2\b\c\g\a\0\4\1\o\p\h\d\r\m\r\7\g\t\b\v\j\6\n\8\g\t\k\o\g\4\4\s\0\m\w\t\y\z\d\m\e\4\z\r\u\z\p\h\x\2\o\c\4\0\f\b\z\k\z\2\r\2\n\q\o\k\o\b\d\y\0\z\n\f\o\0\l\d\5\u\x\5\h\g\3\d\p\1\1\7\s\z\o\t\k\q\4\u\v\q\i\h\9\u\o\h\3\g\t\2\0\2\o\5\e\7\c\t\x\v\i\h\k\k\f\y\3\z\o\w\5\r\e\t\z\m\6\4\i\r\b\f\x\a\6\y\q\s\g\o\e\d\v\e\f\f\z\a\h\p\m\i\3\0\t\r\8\l\s\w\0\d\6\2\1\n\n\l\b\r\h\h\x\t\d\i\2\u\n\a\r\w\d\w\i\7\z\b ]] 00:07:26.843 00:23:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:26.843 00:23:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:27.103 [2024-12-17 00:23:12.867444] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:27.103 [2024-12-17 00:23:12.867567] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72372 ] 00:07:27.103 [2024-12-17 00:23:13.005136] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.103 [2024-12-17 00:23:13.036675] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.103 [2024-12-17 00:23:13.062907] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:27.103  [2024-12-17T00:23:13.365Z] Copying: 512/512 [B] (average 500 kBps) 00:07:27.362 00:07:27.362 00:23:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ sxbqyd7hpyzpc844vq6d312kigfc876eroisfkvi22w1ea5ub02ngufro1u80ked29i1qwwsncw2zzovm2rv2kfrf9a476cwclqplixvu33sw10xp2ywmelwlyh251nvgzge1lfpggb875ml5pgoa1s86sts4bmnie435n6xg5jp7ps2ca1dcbsdtn95rrht4ksc2oymsp8uitrdrxoz0rh373lhv2x61m0lfnqzdofu4rimuvwpzkfufv78d36r2183xakrnscwwg5g125i8elfluxpqnnwfv782rnc9obh37qbozc8djt20a2bcga041ophdrmr7gtbvj6n8gtkog44s0mwtyzdme4zruzphx2oc40fbzkz2r2nqokobdy0znfo0ld5ux5hg3dp117szotkq4uvqih9uoh3gt202o5e7ctxvihkkfy3zow5retzm64irbfxa6yqsgoedveffzahpmi30tr8lsw0d621nnlbrhhxtdi2unarwdwi7zb == \s\x\b\q\y\d\7\h\p\y\z\p\c\8\4\4\v\q\6\d\3\1\2\k\i\g\f\c\8\7\6\e\r\o\i\s\f\k\v\i\2\2\w\1\e\a\5\u\b\0\2\n\g\u\f\r\o\1\u\8\0\k\e\d\2\9\i\1\q\w\w\s\n\c\w\2\z\z\o\v\m\2\r\v\2\k\f\r\f\9\a\4\7\6\c\w\c\l\q\p\l\i\x\v\u\3\3\s\w\1\0\x\p\2\y\w\m\e\l\w\l\y\h\2\5\1\n\v\g\z\g\e\1\l\f\p\g\g\b\8\7\5\m\l\5\p\g\o\a\1\s\8\6\s\t\s\4\b\m\n\i\e\4\3\5\n\6\x\g\5\j\p\7\p\s\2\c\a\1\d\c\b\s\d\t\n\9\5\r\r\h\t\4\k\s\c\2\o\y\m\s\p\8\u\i\t\r\d\r\x\o\z\0\r\h\3\7\3\l\h\v\2\x\6\1\m\0\l\f\n\q\z\d\o\f\u\4\r\i\m\u\v\w\p\z\k\f\u\f\v\7\8\d\3\6\r\2\1\8\3\x\a\k\r\n\s\c\w\w\g\5\g\1\2\5\i\8\e\l\f\l\u\x\p\q\n\n\w\f\v\7\8\2\r\n\c\9\o\b\h\3\7\q\b\o\z\c\8\d\j\t\2\0\a\2\b\c\g\a\0\4\1\o\p\h\d\r\m\r\7\g\t\b\v\j\6\n\8\g\t\k\o\g\4\4\s\0\m\w\t\y\z\d\m\e\4\z\r\u\z\p\h\x\2\o\c\4\0\f\b\z\k\z\2\r\2\n\q\o\k\o\b\d\y\0\z\n\f\o\0\l\d\5\u\x\5\h\g\3\d\p\1\1\7\s\z\o\t\k\q\4\u\v\q\i\h\9\u\o\h\3\g\t\2\0\2\o\5\e\7\c\t\x\v\i\h\k\k\f\y\3\z\o\w\5\r\e\t\z\m\6\4\i\r\b\f\x\a\6\y\q\s\g\o\e\d\v\e\f\f\z\a\h\p\m\i\3\0\t\r\8\l\s\w\0\d\6\2\1\n\n\l\b\r\h\h\x\t\d\i\2\u\n\a\r\w\d\w\i\7\z\b ]] 00:07:27.362 00:23:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:27.362 00:23:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:27.362 [2024-12-17 00:23:13.256036] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:27.362 [2024-12-17 00:23:13.256406] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72387 ] 00:07:27.621 [2024-12-17 00:23:13.390087] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.621 [2024-12-17 00:23:13.421840] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.621 [2024-12-17 00:23:13.448699] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:27.621  [2024-12-17T00:23:13.624Z] Copying: 512/512 [B] (average 500 kBps) 00:07:27.621 00:07:27.621 00:23:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ sxbqyd7hpyzpc844vq6d312kigfc876eroisfkvi22w1ea5ub02ngufro1u80ked29i1qwwsncw2zzovm2rv2kfrf9a476cwclqplixvu33sw10xp2ywmelwlyh251nvgzge1lfpggb875ml5pgoa1s86sts4bmnie435n6xg5jp7ps2ca1dcbsdtn95rrht4ksc2oymsp8uitrdrxoz0rh373lhv2x61m0lfnqzdofu4rimuvwpzkfufv78d36r2183xakrnscwwg5g125i8elfluxpqnnwfv782rnc9obh37qbozc8djt20a2bcga041ophdrmr7gtbvj6n8gtkog44s0mwtyzdme4zruzphx2oc40fbzkz2r2nqokobdy0znfo0ld5ux5hg3dp117szotkq4uvqih9uoh3gt202o5e7ctxvihkkfy3zow5retzm64irbfxa6yqsgoedveffzahpmi30tr8lsw0d621nnlbrhhxtdi2unarwdwi7zb == \s\x\b\q\y\d\7\h\p\y\z\p\c\8\4\4\v\q\6\d\3\1\2\k\i\g\f\c\8\7\6\e\r\o\i\s\f\k\v\i\2\2\w\1\e\a\5\u\b\0\2\n\g\u\f\r\o\1\u\8\0\k\e\d\2\9\i\1\q\w\w\s\n\c\w\2\z\z\o\v\m\2\r\v\2\k\f\r\f\9\a\4\7\6\c\w\c\l\q\p\l\i\x\v\u\3\3\s\w\1\0\x\p\2\y\w\m\e\l\w\l\y\h\2\5\1\n\v\g\z\g\e\1\l\f\p\g\g\b\8\7\5\m\l\5\p\g\o\a\1\s\8\6\s\t\s\4\b\m\n\i\e\4\3\5\n\6\x\g\5\j\p\7\p\s\2\c\a\1\d\c\b\s\d\t\n\9\5\r\r\h\t\4\k\s\c\2\o\y\m\s\p\8\u\i\t\r\d\r\x\o\z\0\r\h\3\7\3\l\h\v\2\x\6\1\m\0\l\f\n\q\z\d\o\f\u\4\r\i\m\u\v\w\p\z\k\f\u\f\v\7\8\d\3\6\r\2\1\8\3\x\a\k\r\n\s\c\w\w\g\5\g\1\2\5\i\8\e\l\f\l\u\x\p\q\n\n\w\f\v\7\8\2\r\n\c\9\o\b\h\3\7\q\b\o\z\c\8\d\j\t\2\0\a\2\b\c\g\a\0\4\1\o\p\h\d\r\m\r\7\g\t\b\v\j\6\n\8\g\t\k\o\g\4\4\s\0\m\w\t\y\z\d\m\e\4\z\r\u\z\p\h\x\2\o\c\4\0\f\b\z\k\z\2\r\2\n\q\o\k\o\b\d\y\0\z\n\f\o\0\l\d\5\u\x\5\h\g\3\d\p\1\1\7\s\z\o\t\k\q\4\u\v\q\i\h\9\u\o\h\3\g\t\2\0\2\o\5\e\7\c\t\x\v\i\h\k\k\f\y\3\z\o\w\5\r\e\t\z\m\6\4\i\r\b\f\x\a\6\y\q\s\g\o\e\d\v\e\f\f\z\a\h\p\m\i\3\0\t\r\8\l\s\w\0\d\6\2\1\n\n\l\b\r\h\h\x\t\d\i\2\u\n\a\r\w\d\w\i\7\z\b ]] 00:07:27.621 00:23:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:27.621 00:23:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:27.881 [2024-12-17 00:23:13.661729] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:27.881 [2024-12-17 00:23:13.661841] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72391 ] 00:07:27.881 [2024-12-17 00:23:13.796290] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.881 [2024-12-17 00:23:13.830293] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.881 [2024-12-17 00:23:13.857831] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:27.881  [2024-12-17T00:23:14.143Z] Copying: 512/512 [B] (average 500 kBps) 00:07:28.140 00:07:28.140 ************************************ 00:07:28.140 END TEST dd_flags_misc 00:07:28.140 ************************************ 00:07:28.140 00:23:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ sxbqyd7hpyzpc844vq6d312kigfc876eroisfkvi22w1ea5ub02ngufro1u80ked29i1qwwsncw2zzovm2rv2kfrf9a476cwclqplixvu33sw10xp2ywmelwlyh251nvgzge1lfpggb875ml5pgoa1s86sts4bmnie435n6xg5jp7ps2ca1dcbsdtn95rrht4ksc2oymsp8uitrdrxoz0rh373lhv2x61m0lfnqzdofu4rimuvwpzkfufv78d36r2183xakrnscwwg5g125i8elfluxpqnnwfv782rnc9obh37qbozc8djt20a2bcga041ophdrmr7gtbvj6n8gtkog44s0mwtyzdme4zruzphx2oc40fbzkz2r2nqokobdy0znfo0ld5ux5hg3dp117szotkq4uvqih9uoh3gt202o5e7ctxvihkkfy3zow5retzm64irbfxa6yqsgoedveffzahpmi30tr8lsw0d621nnlbrhhxtdi2unarwdwi7zb == \s\x\b\q\y\d\7\h\p\y\z\p\c\8\4\4\v\q\6\d\3\1\2\k\i\g\f\c\8\7\6\e\r\o\i\s\f\k\v\i\2\2\w\1\e\a\5\u\b\0\2\n\g\u\f\r\o\1\u\8\0\k\e\d\2\9\i\1\q\w\w\s\n\c\w\2\z\z\o\v\m\2\r\v\2\k\f\r\f\9\a\4\7\6\c\w\c\l\q\p\l\i\x\v\u\3\3\s\w\1\0\x\p\2\y\w\m\e\l\w\l\y\h\2\5\1\n\v\g\z\g\e\1\l\f\p\g\g\b\8\7\5\m\l\5\p\g\o\a\1\s\8\6\s\t\s\4\b\m\n\i\e\4\3\5\n\6\x\g\5\j\p\7\p\s\2\c\a\1\d\c\b\s\d\t\n\9\5\r\r\h\t\4\k\s\c\2\o\y\m\s\p\8\u\i\t\r\d\r\x\o\z\0\r\h\3\7\3\l\h\v\2\x\6\1\m\0\l\f\n\q\z\d\o\f\u\4\r\i\m\u\v\w\p\z\k\f\u\f\v\7\8\d\3\6\r\2\1\8\3\x\a\k\r\n\s\c\w\w\g\5\g\1\2\5\i\8\e\l\f\l\u\x\p\q\n\n\w\f\v\7\8\2\r\n\c\9\o\b\h\3\7\q\b\o\z\c\8\d\j\t\2\0\a\2\b\c\g\a\0\4\1\o\p\h\d\r\m\r\7\g\t\b\v\j\6\n\8\g\t\k\o\g\4\4\s\0\m\w\t\y\z\d\m\e\4\z\r\u\z\p\h\x\2\o\c\4\0\f\b\z\k\z\2\r\2\n\q\o\k\o\b\d\y\0\z\n\f\o\0\l\d\5\u\x\5\h\g\3\d\p\1\1\7\s\z\o\t\k\q\4\u\v\q\i\h\9\u\o\h\3\g\t\2\0\2\o\5\e\7\c\t\x\v\i\h\k\k\f\y\3\z\o\w\5\r\e\t\z\m\6\4\i\r\b\f\x\a\6\y\q\s\g\o\e\d\v\e\f\f\z\a\h\p\m\i\3\0\t\r\8\l\s\w\0\d\6\2\1\n\n\l\b\r\h\h\x\t\d\i\2\u\n\a\r\w\d\w\i\7\z\b ]] 00:07:28.140 00:07:28.140 real 0m3.228s 00:07:28.140 user 0m1.599s 00:07:28.140 sys 0m1.387s 00:07:28.140 00:23:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.140 00:23:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:28.140 00:23:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:07:28.140 00:23:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:28.140 * Second test run, disabling liburing, forcing AIO 00:07:28.140 00:23:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:28.140 00:23:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:28.140 00:23:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:28.140 00:23:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.141 00:23:14 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:07:28.141 ************************************ 00:07:28.141 START TEST dd_flag_append_forced_aio 00:07:28.141 ************************************ 00:07:28.141 00:23:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1125 -- # append 00:07:28.141 00:23:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:07:28.141 00:23:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:07:28.141 00:23:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:07:28.141 00:23:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:28.141 00:23:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:28.141 00:23:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=4gby63qhkl4u8m17eifqyovhsyakjchs 00:07:28.141 00:23:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:07:28.141 00:23:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:28.141 00:23:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:28.141 00:23:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=kyoo2l9jgxreyr9dmigvkzdg4uezry6a 00:07:28.141 00:23:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s 4gby63qhkl4u8m17eifqyovhsyakjchs 00:07:28.141 00:23:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s kyoo2l9jgxreyr9dmigvkzdg4uezry6a 00:07:28.141 00:23:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:28.141 [2024-12-17 00:23:14.131871] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
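The append run set up above writes two known 32-byte strings and then appends dump0 onto dump1 through the AIO path; a condensed sketch of what the trace shows (output redirections are not visible in xtrace and are assumed here), with the result confirmed by the @27 comparison in the following trace entries:

  dump0=4gby63qhkl4u8m17eifqyovhsyakjchs   # gen_bytes 32 result for dd.dump0
  dump1=kyoo2l9jgxreyr9dmigvkzdg4uezry6a   # gen_bytes 32 result for dd.dump1
  printf %s "$dump0" > dd.dump0            # redirection target assumed
  printf %s "$dump1" > dd.dump1            # redirection target assumed
  spdk_dd --aio --if=dd.dump0 --of=dd.dump1 --oflag=append
  # dd.dump1 must now hold its original bytes followed by dump0's bytes
  [[ "$(< dd.dump1)" == "${dump1}${dump0}" ]]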
00:07:28.141 [2024-12-17 00:23:14.132274] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72419 ] 00:07:28.400 [2024-12-17 00:23:14.268205] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.400 [2024-12-17 00:23:14.302182] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.400 [2024-12-17 00:23:14.330633] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:28.400  [2024-12-17T00:23:14.663Z] Copying: 32/32 [B] (average 31 kBps) 00:07:28.660 00:07:28.660 00:23:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ kyoo2l9jgxreyr9dmigvkzdg4uezry6a4gby63qhkl4u8m17eifqyovhsyakjchs == \k\y\o\o\2\l\9\j\g\x\r\e\y\r\9\d\m\i\g\v\k\z\d\g\4\u\e\z\r\y\6\a\4\g\b\y\6\3\q\h\k\l\4\u\8\m\1\7\e\i\f\q\y\o\v\h\s\y\a\k\j\c\h\s ]] 00:07:28.660 00:07:28.660 real 0m0.458s 00:07:28.660 user 0m0.241s 00:07:28.660 sys 0m0.093s 00:07:28.660 00:23:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.660 00:23:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:28.660 ************************************ 00:07:28.660 END TEST dd_flag_append_forced_aio 00:07:28.660 ************************************ 00:07:28.660 00:23:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:28.660 00:23:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:28.660 00:23:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.660 00:23:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:28.660 ************************************ 00:07:28.660 START TEST dd_flag_directory_forced_aio 00:07:28.660 ************************************ 00:07:28.660 00:23:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1125 -- # directory 00:07:28.660 00:23:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:28.660 00:23:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:07:28.660 00:23:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:28.660 00:23:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.660 00:23:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.660 00:23:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.660 00:23:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.660 00:23:14 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.660 00:23:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.660 00:23:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.660 00:23:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:28.660 00:23:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:28.660 [2024-12-17 00:23:14.640769] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:28.660 [2024-12-17 00:23:14.640911] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72446 ] 00:07:28.919 [2024-12-17 00:23:14.778173] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.919 [2024-12-17 00:23:14.812544] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.919 [2024-12-17 00:23:14.838884] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:28.919 [2024-12-17 00:23:14.853806] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:28.919 [2024-12-17 00:23:14.853881] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:28.919 [2024-12-17 00:23:14.853909] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:28.919 [2024-12-17 00:23:14.916682] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:29.179 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:07:29.179 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:29.179 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:07:29.179 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:07:29.179 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:07:29.179 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:29.179 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:29.179 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:07:29.179 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:29.179 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.179 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:29.179 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.179 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:29.179 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.179 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:29.179 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.179 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:29.179 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:29.179 [2024-12-17 00:23:15.072305] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:29.179 [2024-12-17 00:23:15.072463] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72455 ] 00:07:29.438 [2024-12-17 00:23:15.207186] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.438 [2024-12-17 00:23:15.241320] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.438 [2024-12-17 00:23:15.269246] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:29.438 [2024-12-17 00:23:15.283883] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:29.438 [2024-12-17 00:23:15.283959] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:29.438 [2024-12-17 00:23:15.284003] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:29.438 [2024-12-17 00:23:15.346900] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:29.438 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:07:29.438 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:29.438 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:07:29.438 ************************************ 00:07:29.438 END TEST dd_flag_directory_forced_aio 00:07:29.438 ************************************ 00:07:29.438 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:07:29.438 00:23:15 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:07:29.438 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:29.438 00:07:29.438 real 0m0.852s 00:07:29.438 user 0m0.434s 00:07:29.438 sys 0m0.209s 00:07:29.438 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.438 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:29.697 00:23:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:29.697 00:23:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:29.697 00:23:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.697 00:23:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:29.697 ************************************ 00:07:29.697 START TEST dd_flag_nofollow_forced_aio 00:07:29.697 ************************************ 00:07:29.697 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1125 -- # nofollow 00:07:29.697 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:29.697 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:29.697 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:29.697 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:29.697 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:29.697 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:07:29.697 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:29.697 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.697 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:29.697 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.697 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:29.697 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.697 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:07:29.697 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.697 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:29.697 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:29.697 [2024-12-17 00:23:15.548325] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:29.697 [2024-12-17 00:23:15.548457] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72484 ] 00:07:29.697 [2024-12-17 00:23:15.688630] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.956 [2024-12-17 00:23:15.721735] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.956 [2024-12-17 00:23:15.748726] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:29.956 [2024-12-17 00:23:15.762662] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:29.956 [2024-12-17 00:23:15.762730] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:29.956 [2024-12-17 00:23:15.762761] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:29.956 [2024-12-17 00:23:15.817873] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:29.956 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:07:29.956 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:29.956 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:07:29.956 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:07:29.956 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:07:29.956 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:29.956 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:29.956 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:07:29.956 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:29.957 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.957 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:29.957 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.957 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:29.957 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.957 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:29.957 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.957 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:29.957 00:23:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:29.957 [2024-12-17 00:23:15.947470] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:29.957 [2024-12-17 00:23:15.947568] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72488 ] 00:07:30.216 [2024-12-17 00:23:16.083959] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.216 [2024-12-17 00:23:16.117539] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.216 [2024-12-17 00:23:16.145965] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:30.216 [2024-12-17 00:23:16.161594] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:30.216 [2024-12-17 00:23:16.161639] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:30.216 [2024-12-17 00:23:16.161654] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:30.475 [2024-12-17 00:23:16.226986] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:30.475 00:23:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:07:30.475 00:23:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:30.475 00:23:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:07:30.475 00:23:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:07:30.475 00:23:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:07:30.475 00:23:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:30.475 00:23:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:07:30.475 00:23:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:30.475 00:23:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:30.475 00:23:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:30.475 [2024-12-17 00:23:16.377221] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:30.475 [2024-12-17 00:23:16.377376] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72501 ] 00:07:30.735 [2024-12-17 00:23:16.510473] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.735 [2024-12-17 00:23:16.544293] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.735 [2024-12-17 00:23:16.571704] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:30.735  [2024-12-17T00:23:16.997Z] Copying: 512/512 [B] (average 500 kBps) 00:07:30.994 00:07:30.994 ************************************ 00:07:30.994 END TEST dd_flag_nofollow_forced_aio 00:07:30.994 ************************************ 00:07:30.994 00:23:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ i4q2kvvy8gv8gsi5q18b564yzcxssejl2sh82wi0qbj68k6yzrud4ytxupk92d1gswzq7kazr0t2v0f3sbn87ysnbzl525fmids34v6wgbrlhn4aqzsjjy84ro8dt1fixw4vewm48jajl5mhr24orlvotgdhy8ofjjo7gyis4mp2sstc24z0wj8cy4m25a8h3lh46wcfvrysxk7x45ojy1477ggs0cjlk4jdexwzuek3aniaxgtbcgr77u5hy9pbli9jepsrma2lnn3emxho6vhg8yet4lgijar28dshhr01pz75uxhq6ezjk5fjbz6vt7ox5hlsl0gpja9n0t2u83uoq7i6l9ymo0q53dqbg2y7u3grrkixs85o7as5p3tipqnnlvkmxl2dbtf3hybnisj3cbufdegbe6w4hcdfjdp1ztkfbz6r0j0ga8uov2bbuwuknp4brtw70y31m7icwo72xq8srvpg63n4fumee9erapmjg9yt1nurp3a3mg26 == \i\4\q\2\k\v\v\y\8\g\v\8\g\s\i\5\q\1\8\b\5\6\4\y\z\c\x\s\s\e\j\l\2\s\h\8\2\w\i\0\q\b\j\6\8\k\6\y\z\r\u\d\4\y\t\x\u\p\k\9\2\d\1\g\s\w\z\q\7\k\a\z\r\0\t\2\v\0\f\3\s\b\n\8\7\y\s\n\b\z\l\5\2\5\f\m\i\d\s\3\4\v\6\w\g\b\r\l\h\n\4\a\q\z\s\j\j\y\8\4\r\o\8\d\t\1\f\i\x\w\4\v\e\w\m\4\8\j\a\j\l\5\m\h\r\2\4\o\r\l\v\o\t\g\d\h\y\8\o\f\j\j\o\7\g\y\i\s\4\m\p\2\s\s\t\c\2\4\z\0\w\j\8\c\y\4\m\2\5\a\8\h\3\l\h\4\6\w\c\f\v\r\y\s\x\k\7\x\4\5\o\j\y\1\4\7\7\g\g\s\0\c\j\l\k\4\j\d\e\x\w\z\u\e\k\3\a\n\i\a\x\g\t\b\c\g\r\7\7\u\5\h\y\9\p\b\l\i\9\j\e\p\s\r\m\a\2\l\n\n\3\e\m\x\h\o\6\v\h\g\8\y\e\t\4\l\g\i\j\a\r\2\8\d\s\h\h\r\0\1\p\z\7\5\u\x\h\q\6\e\z\j\k\5\f\j\b\z\6\v\t\7\o\x\5\h\l\s\l\0\g\p\j\a\9\n\0\t\2\u\8\3\u\o\q\7\i\6\l\9\y\m\o\0\q\5\3\d\q\b\g\2\y\7\u\3\g\r\r\k\i\x\s\8\5\o\7\a\s\5\p\3\t\i\p\q\n\n\l\v\k\m\x\l\2\d\b\t\f\3\h\y\b\n\i\s\j\3\c\b\u\f\d\e\g\b\e\6\w\4\h\c\d\f\j\d\p\1\z\t\k\f\b\z\6\r\0\j\0\g\a\8\u\o\v\2\b\b\u\w\u\k\n\p\4\b\r\t\w\7\0\y\3\1\m\7\i\c\w\o\7\2\x\q\8\s\r\v\p\g\6\3\n\4\f\u\m\e\e\9\e\r\a\p\m\j\g\9\y\t\1\n\u\r\p\3\a\3\m\g\2\6 ]] 00:07:30.994 00:07:30.994 real 0m1.290s 00:07:30.994 user 0m0.663s 00:07:30.994 sys 0m0.296s 00:07:30.994 00:23:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.994 00:23:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:30.994 00:23:16 spdk_dd.spdk_dd_posix -- 
dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:07:30.994 00:23:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:30.994 00:23:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.994 00:23:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:30.994 ************************************ 00:07:30.994 START TEST dd_flag_noatime_forced_aio 00:07:30.994 ************************************ 00:07:30.994 00:23:16 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1125 -- # noatime 00:07:30.994 00:23:16 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:07:30.994 00:23:16 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:07:30.994 00:23:16 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:07:30.994 00:23:16 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:30.994 00:23:16 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:30.994 00:23:16 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:30.994 00:23:16 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1734394996 00:07:30.994 00:23:16 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:30.994 00:23:16 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1734394996 00:07:30.994 00:23:16 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:07:31.931 00:23:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:31.931 [2024-12-17 00:23:17.906716] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
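The noatime test is repeated here under forced AIO; the flow is the same sketch given earlier for dd_flag_noatime, the only difference being the extra --aio flag added to DD_APP at posix.sh@113:

  # identical noatime assertion, forced through the AIO code path
  spdk_dd --aio --if=dd.dump0 --iflag=noatime --of=dd.dump1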
00:07:31.932 [2024-12-17 00:23:17.907207] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72536 ] 00:07:32.191 [2024-12-17 00:23:18.048199] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.191 [2024-12-17 00:23:18.090329] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.191 [2024-12-17 00:23:18.123128] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:32.191  [2024-12-17T00:23:18.453Z] Copying: 512/512 [B] (average 500 kBps) 00:07:32.450 00:07:32.450 00:23:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:32.450 00:23:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1734394996 )) 00:07:32.450 00:23:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:32.450 00:23:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1734394996 )) 00:07:32.450 00:23:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:32.450 [2024-12-17 00:23:18.381739] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:32.450 [2024-12-17 00:23:18.381834] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72553 ] 00:07:32.709 [2024-12-17 00:23:18.517271] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.709 [2024-12-17 00:23:18.551602] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.709 [2024-12-17 00:23:18.579332] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:32.709  [2024-12-17T00:23:18.975Z] Copying: 512/512 [B] (average 500 kBps) 00:07:32.972 00:07:32.972 00:23:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:32.972 00:23:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1734394998 )) 00:07:32.972 00:07:32.972 real 0m1.945s 00:07:32.972 user 0m0.460s 00:07:32.972 sys 0m0.226s 00:07:32.972 ************************************ 00:07:32.972 END TEST dd_flag_noatime_forced_aio 00:07:32.972 ************************************ 00:07:32.972 00:23:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:32.972 00:23:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:32.972 00:23:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:07:32.972 00:23:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:32.972 00:23:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:32.972 00:23:18 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:07:32.972 ************************************ 00:07:32.972 START TEST dd_flags_misc_forced_aio 00:07:32.972 ************************************ 00:07:32.972 00:23:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1125 -- # io 00:07:32.972 00:23:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:32.972 00:23:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:32.972 00:23:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:32.972 00:23:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:32.972 00:23:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:32.972 00:23:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:32.972 00:23:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:32.972 00:23:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:32.972 00:23:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:32.972 [2024-12-17 00:23:18.883091] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:32.972 [2024-12-17 00:23:18.883195] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72574 ] 00:07:33.267 [2024-12-17 00:23:19.012063] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.267 [2024-12-17 00:23:19.046076] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.267 [2024-12-17 00:23:19.073655] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:33.267  [2024-12-17T00:23:19.270Z] Copying: 512/512 [B] (average 500 kBps) 00:07:33.267 00:07:33.526 00:23:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 8cyngkjt73c7ony7k26edykppsw2efo7do8qlzquejzydwdfx77lzngoju9it5nasbn57ho84taaji7pwl5yuq8h1xxpagqjq3mj6kx8gzdfuu088c7l503goqy8tb4zrilo2zy6qdpvat0s0ykieabye0il107zc3sh0rmvxinmu098or0z0ql2650wigs5qy0rnbftq4k6xghdi7hkfhn4il5086hu63rbbhik0sa5o6bgt7lvnx31lx1xxm2nb9eq7elcbkqi8l3ftx3cn83qahmh6ofoahwe07913he2p260yln5qwp26897s9kfj9a7cnzbn25mgg56vawm3kckci9cxppv1qz6b1bhm66dndut4xg5jhsbmvl3371fiqgfplh12gu1pu5lf2tosltr0s31lp25ydvis6sm2dzmxx4tmfqx2rjgo78vktqnxm63rlmc98iu42jxnv0s3ksi5wni77xfixlog4nlnz5tclqi8nsyz5mk5y3enowv == 
\8\c\y\n\g\k\j\t\7\3\c\7\o\n\y\7\k\2\6\e\d\y\k\p\p\s\w\2\e\f\o\7\d\o\8\q\l\z\q\u\e\j\z\y\d\w\d\f\x\7\7\l\z\n\g\o\j\u\9\i\t\5\n\a\s\b\n\5\7\h\o\8\4\t\a\a\j\i\7\p\w\l\5\y\u\q\8\h\1\x\x\p\a\g\q\j\q\3\m\j\6\k\x\8\g\z\d\f\u\u\0\8\8\c\7\l\5\0\3\g\o\q\y\8\t\b\4\z\r\i\l\o\2\z\y\6\q\d\p\v\a\t\0\s\0\y\k\i\e\a\b\y\e\0\i\l\1\0\7\z\c\3\s\h\0\r\m\v\x\i\n\m\u\0\9\8\o\r\0\z\0\q\l\2\6\5\0\w\i\g\s\5\q\y\0\r\n\b\f\t\q\4\k\6\x\g\h\d\i\7\h\k\f\h\n\4\i\l\5\0\8\6\h\u\6\3\r\b\b\h\i\k\0\s\a\5\o\6\b\g\t\7\l\v\n\x\3\1\l\x\1\x\x\m\2\n\b\9\e\q\7\e\l\c\b\k\q\i\8\l\3\f\t\x\3\c\n\8\3\q\a\h\m\h\6\o\f\o\a\h\w\e\0\7\9\1\3\h\e\2\p\2\6\0\y\l\n\5\q\w\p\2\6\8\9\7\s\9\k\f\j\9\a\7\c\n\z\b\n\2\5\m\g\g\5\6\v\a\w\m\3\k\c\k\c\i\9\c\x\p\p\v\1\q\z\6\b\1\b\h\m\6\6\d\n\d\u\t\4\x\g\5\j\h\s\b\m\v\l\3\3\7\1\f\i\q\g\f\p\l\h\1\2\g\u\1\p\u\5\l\f\2\t\o\s\l\t\r\0\s\3\1\l\p\2\5\y\d\v\i\s\6\s\m\2\d\z\m\x\x\4\t\m\f\q\x\2\r\j\g\o\7\8\v\k\t\q\n\x\m\6\3\r\l\m\c\9\8\i\u\4\2\j\x\n\v\0\s\3\k\s\i\5\w\n\i\7\7\x\f\i\x\l\o\g\4\n\l\n\z\5\t\c\l\q\i\8\n\s\y\z\5\m\k\5\y\3\e\n\o\w\v ]] 00:07:33.527 00:23:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:33.527 00:23:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:33.527 [2024-12-17 00:23:19.323997] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:33.527 [2024-12-17 00:23:19.324111] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72587 ] 00:07:33.527 [2024-12-17 00:23:19.461420] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.527 [2024-12-17 00:23:19.498341] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.527 [2024-12-17 00:23:19.527035] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:33.786  [2024-12-17T00:23:19.789Z] Copying: 512/512 [B] (average 500 kBps) 00:07:33.786 00:07:33.786 00:23:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 8cyngkjt73c7ony7k26edykppsw2efo7do8qlzquejzydwdfx77lzngoju9it5nasbn57ho84taaji7pwl5yuq8h1xxpagqjq3mj6kx8gzdfuu088c7l503goqy8tb4zrilo2zy6qdpvat0s0ykieabye0il107zc3sh0rmvxinmu098or0z0ql2650wigs5qy0rnbftq4k6xghdi7hkfhn4il5086hu63rbbhik0sa5o6bgt7lvnx31lx1xxm2nb9eq7elcbkqi8l3ftx3cn83qahmh6ofoahwe07913he2p260yln5qwp26897s9kfj9a7cnzbn25mgg56vawm3kckci9cxppv1qz6b1bhm66dndut4xg5jhsbmvl3371fiqgfplh12gu1pu5lf2tosltr0s31lp25ydvis6sm2dzmxx4tmfqx2rjgo78vktqnxm63rlmc98iu42jxnv0s3ksi5wni77xfixlog4nlnz5tclqi8nsyz5mk5y3enowv == 
\8\c\y\n\g\k\j\t\7\3\c\7\o\n\y\7\k\2\6\e\d\y\k\p\p\s\w\2\e\f\o\7\d\o\8\q\l\z\q\u\e\j\z\y\d\w\d\f\x\7\7\l\z\n\g\o\j\u\9\i\t\5\n\a\s\b\n\5\7\h\o\8\4\t\a\a\j\i\7\p\w\l\5\y\u\q\8\h\1\x\x\p\a\g\q\j\q\3\m\j\6\k\x\8\g\z\d\f\u\u\0\8\8\c\7\l\5\0\3\g\o\q\y\8\t\b\4\z\r\i\l\o\2\z\y\6\q\d\p\v\a\t\0\s\0\y\k\i\e\a\b\y\e\0\i\l\1\0\7\z\c\3\s\h\0\r\m\v\x\i\n\m\u\0\9\8\o\r\0\z\0\q\l\2\6\5\0\w\i\g\s\5\q\y\0\r\n\b\f\t\q\4\k\6\x\g\h\d\i\7\h\k\f\h\n\4\i\l\5\0\8\6\h\u\6\3\r\b\b\h\i\k\0\s\a\5\o\6\b\g\t\7\l\v\n\x\3\1\l\x\1\x\x\m\2\n\b\9\e\q\7\e\l\c\b\k\q\i\8\l\3\f\t\x\3\c\n\8\3\q\a\h\m\h\6\o\f\o\a\h\w\e\0\7\9\1\3\h\e\2\p\2\6\0\y\l\n\5\q\w\p\2\6\8\9\7\s\9\k\f\j\9\a\7\c\n\z\b\n\2\5\m\g\g\5\6\v\a\w\m\3\k\c\k\c\i\9\c\x\p\p\v\1\q\z\6\b\1\b\h\m\6\6\d\n\d\u\t\4\x\g\5\j\h\s\b\m\v\l\3\3\7\1\f\i\q\g\f\p\l\h\1\2\g\u\1\p\u\5\l\f\2\t\o\s\l\t\r\0\s\3\1\l\p\2\5\y\d\v\i\s\6\s\m\2\d\z\m\x\x\4\t\m\f\q\x\2\r\j\g\o\7\8\v\k\t\q\n\x\m\6\3\r\l\m\c\9\8\i\u\4\2\j\x\n\v\0\s\3\k\s\i\5\w\n\i\7\7\x\f\i\x\l\o\g\4\n\l\n\z\5\t\c\l\q\i\8\n\s\y\z\5\m\k\5\y\3\e\n\o\w\v ]] 00:07:33.786 00:23:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:33.786 00:23:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:33.786 [2024-12-17 00:23:19.764335] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:33.786 [2024-12-17 00:23:19.764442] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72589 ] 00:07:34.045 [2024-12-17 00:23:19.900965] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.045 [2024-12-17 00:23:19.934865] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.045 [2024-12-17 00:23:19.962557] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:34.045  [2024-12-17T00:23:20.307Z] Copying: 512/512 [B] (average 250 kBps) 00:07:34.304 00:07:34.304 00:23:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 8cyngkjt73c7ony7k26edykppsw2efo7do8qlzquejzydwdfx77lzngoju9it5nasbn57ho84taaji7pwl5yuq8h1xxpagqjq3mj6kx8gzdfuu088c7l503goqy8tb4zrilo2zy6qdpvat0s0ykieabye0il107zc3sh0rmvxinmu098or0z0ql2650wigs5qy0rnbftq4k6xghdi7hkfhn4il5086hu63rbbhik0sa5o6bgt7lvnx31lx1xxm2nb9eq7elcbkqi8l3ftx3cn83qahmh6ofoahwe07913he2p260yln5qwp26897s9kfj9a7cnzbn25mgg56vawm3kckci9cxppv1qz6b1bhm66dndut4xg5jhsbmvl3371fiqgfplh12gu1pu5lf2tosltr0s31lp25ydvis6sm2dzmxx4tmfqx2rjgo78vktqnxm63rlmc98iu42jxnv0s3ksi5wni77xfixlog4nlnz5tclqi8nsyz5mk5y3enowv == 
\8\c\y\n\g\k\j\t\7\3\c\7\o\n\y\7\k\2\6\e\d\y\k\p\p\s\w\2\e\f\o\7\d\o\8\q\l\z\q\u\e\j\z\y\d\w\d\f\x\7\7\l\z\n\g\o\j\u\9\i\t\5\n\a\s\b\n\5\7\h\o\8\4\t\a\a\j\i\7\p\w\l\5\y\u\q\8\h\1\x\x\p\a\g\q\j\q\3\m\j\6\k\x\8\g\z\d\f\u\u\0\8\8\c\7\l\5\0\3\g\o\q\y\8\t\b\4\z\r\i\l\o\2\z\y\6\q\d\p\v\a\t\0\s\0\y\k\i\e\a\b\y\e\0\i\l\1\0\7\z\c\3\s\h\0\r\m\v\x\i\n\m\u\0\9\8\o\r\0\z\0\q\l\2\6\5\0\w\i\g\s\5\q\y\0\r\n\b\f\t\q\4\k\6\x\g\h\d\i\7\h\k\f\h\n\4\i\l\5\0\8\6\h\u\6\3\r\b\b\h\i\k\0\s\a\5\o\6\b\g\t\7\l\v\n\x\3\1\l\x\1\x\x\m\2\n\b\9\e\q\7\e\l\c\b\k\q\i\8\l\3\f\t\x\3\c\n\8\3\q\a\h\m\h\6\o\f\o\a\h\w\e\0\7\9\1\3\h\e\2\p\2\6\0\y\l\n\5\q\w\p\2\6\8\9\7\s\9\k\f\j\9\a\7\c\n\z\b\n\2\5\m\g\g\5\6\v\a\w\m\3\k\c\k\c\i\9\c\x\p\p\v\1\q\z\6\b\1\b\h\m\6\6\d\n\d\u\t\4\x\g\5\j\h\s\b\m\v\l\3\3\7\1\f\i\q\g\f\p\l\h\1\2\g\u\1\p\u\5\l\f\2\t\o\s\l\t\r\0\s\3\1\l\p\2\5\y\d\v\i\s\6\s\m\2\d\z\m\x\x\4\t\m\f\q\x\2\r\j\g\o\7\8\v\k\t\q\n\x\m\6\3\r\l\m\c\9\8\i\u\4\2\j\x\n\v\0\s\3\k\s\i\5\w\n\i\7\7\x\f\i\x\l\o\g\4\n\l\n\z\5\t\c\l\q\i\8\n\s\y\z\5\m\k\5\y\3\e\n\o\w\v ]] 00:07:34.304 00:23:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:34.304 00:23:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:34.304 [2024-12-17 00:23:20.211454] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:34.304 [2024-12-17 00:23:20.211564] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72591 ] 00:07:34.563 [2024-12-17 00:23:20.348913] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.563 [2024-12-17 00:23:20.381429] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.563 [2024-12-17 00:23:20.408633] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:34.563  [2024-12-17T00:23:20.825Z] Copying: 512/512 [B] (average 250 kBps) 00:07:34.822 00:07:34.822 00:23:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 8cyngkjt73c7ony7k26edykppsw2efo7do8qlzquejzydwdfx77lzngoju9it5nasbn57ho84taaji7pwl5yuq8h1xxpagqjq3mj6kx8gzdfuu088c7l503goqy8tb4zrilo2zy6qdpvat0s0ykieabye0il107zc3sh0rmvxinmu098or0z0ql2650wigs5qy0rnbftq4k6xghdi7hkfhn4il5086hu63rbbhik0sa5o6bgt7lvnx31lx1xxm2nb9eq7elcbkqi8l3ftx3cn83qahmh6ofoahwe07913he2p260yln5qwp26897s9kfj9a7cnzbn25mgg56vawm3kckci9cxppv1qz6b1bhm66dndut4xg5jhsbmvl3371fiqgfplh12gu1pu5lf2tosltr0s31lp25ydvis6sm2dzmxx4tmfqx2rjgo78vktqnxm63rlmc98iu42jxnv0s3ksi5wni77xfixlog4nlnz5tclqi8nsyz5mk5y3enowv == 
\8\c\y\n\g\k\j\t\7\3\c\7\o\n\y\7\k\2\6\e\d\y\k\p\p\s\w\2\e\f\o\7\d\o\8\q\l\z\q\u\e\j\z\y\d\w\d\f\x\7\7\l\z\n\g\o\j\u\9\i\t\5\n\a\s\b\n\5\7\h\o\8\4\t\a\a\j\i\7\p\w\l\5\y\u\q\8\h\1\x\x\p\a\g\q\j\q\3\m\j\6\k\x\8\g\z\d\f\u\u\0\8\8\c\7\l\5\0\3\g\o\q\y\8\t\b\4\z\r\i\l\o\2\z\y\6\q\d\p\v\a\t\0\s\0\y\k\i\e\a\b\y\e\0\i\l\1\0\7\z\c\3\s\h\0\r\m\v\x\i\n\m\u\0\9\8\o\r\0\z\0\q\l\2\6\5\0\w\i\g\s\5\q\y\0\r\n\b\f\t\q\4\k\6\x\g\h\d\i\7\h\k\f\h\n\4\i\l\5\0\8\6\h\u\6\3\r\b\b\h\i\k\0\s\a\5\o\6\b\g\t\7\l\v\n\x\3\1\l\x\1\x\x\m\2\n\b\9\e\q\7\e\l\c\b\k\q\i\8\l\3\f\t\x\3\c\n\8\3\q\a\h\m\h\6\o\f\o\a\h\w\e\0\7\9\1\3\h\e\2\p\2\6\0\y\l\n\5\q\w\p\2\6\8\9\7\s\9\k\f\j\9\a\7\c\n\z\b\n\2\5\m\g\g\5\6\v\a\w\m\3\k\c\k\c\i\9\c\x\p\p\v\1\q\z\6\b\1\b\h\m\6\6\d\n\d\u\t\4\x\g\5\j\h\s\b\m\v\l\3\3\7\1\f\i\q\g\f\p\l\h\1\2\g\u\1\p\u\5\l\f\2\t\o\s\l\t\r\0\s\3\1\l\p\2\5\y\d\v\i\s\6\s\m\2\d\z\m\x\x\4\t\m\f\q\x\2\r\j\g\o\7\8\v\k\t\q\n\x\m\6\3\r\l\m\c\9\8\i\u\4\2\j\x\n\v\0\s\3\k\s\i\5\w\n\i\7\7\x\f\i\x\l\o\g\4\n\l\n\z\5\t\c\l\q\i\8\n\s\y\z\5\m\k\5\y\3\e\n\o\w\v ]] 00:07:34.822 00:23:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:34.822 00:23:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:34.822 00:23:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:34.822 00:23:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:34.822 00:23:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:34.822 00:23:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:34.822 [2024-12-17 00:23:20.657212] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:34.822 [2024-12-17 00:23:20.657534] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72604 ] 00:07:34.822 [2024-12-17 00:23:20.794596] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.081 [2024-12-17 00:23:20.832028] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.081 [2024-12-17 00:23:20.859090] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:35.081  [2024-12-17T00:23:21.084Z] Copying: 512/512 [B] (average 500 kBps) 00:07:35.081 00:07:35.081 00:23:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ s5ym3qnum2bxnvxevngitmpvhroi0opemypd8x9dtokntpids0da8au8qswdqm7gi9qx3fpth24j4mjnrnsbxt7zihxgofk0kfu8os6i60392yo312llmp8j48f1lvjm02x3vodcdv3dbv0a67hnfay9z9jpwlg7tskazwe4i3v8z2fcurhj8oszjxy567527hke424zw60gzgmdq3bnsedcsx20n8vf1u0l3kdyukx8erml7o8t2dhl2pdolu92cvwef6uopm6g9vc53d7qkevvpn8o0evohwc6lk4nomkx9d75e5backn7qha1vxl0yda0qd4tmce21xozoekt0tv7ctzaqbby2ympbuprlhq7ss058fbvvjx1kbeidm5ko8cmeovuq9vwgehljca48o88m6cc4yyta78exgs240b9lx13k6dga0qtned4zqt5a8pd51ed6nc3ll3ffdkj2f1rbi79mqymp4zes92xydl8qj3qz4lrsou3l7k6mvrx == \s\5\y\m\3\q\n\u\m\2\b\x\n\v\x\e\v\n\g\i\t\m\p\v\h\r\o\i\0\o\p\e\m\y\p\d\8\x\9\d\t\o\k\n\t\p\i\d\s\0\d\a\8\a\u\8\q\s\w\d\q\m\7\g\i\9\q\x\3\f\p\t\h\2\4\j\4\m\j\n\r\n\s\b\x\t\7\z\i\h\x\g\o\f\k\0\k\f\u\8\o\s\6\i\6\0\3\9\2\y\o\3\1\2\l\l\m\p\8\j\4\8\f\1\l\v\j\m\0\2\x\3\v\o\d\c\d\v\3\d\b\v\0\a\6\7\h\n\f\a\y\9\z\9\j\p\w\l\g\7\t\s\k\a\z\w\e\4\i\3\v\8\z\2\f\c\u\r\h\j\8\o\s\z\j\x\y\5\6\7\5\2\7\h\k\e\4\2\4\z\w\6\0\g\z\g\m\d\q\3\b\n\s\e\d\c\s\x\2\0\n\8\v\f\1\u\0\l\3\k\d\y\u\k\x\8\e\r\m\l\7\o\8\t\2\d\h\l\2\p\d\o\l\u\9\2\c\v\w\e\f\6\u\o\p\m\6\g\9\v\c\5\3\d\7\q\k\e\v\v\p\n\8\o\0\e\v\o\h\w\c\6\l\k\4\n\o\m\k\x\9\d\7\5\e\5\b\a\c\k\n\7\q\h\a\1\v\x\l\0\y\d\a\0\q\d\4\t\m\c\e\2\1\x\o\z\o\e\k\t\0\t\v\7\c\t\z\a\q\b\b\y\2\y\m\p\b\u\p\r\l\h\q\7\s\s\0\5\8\f\b\v\v\j\x\1\k\b\e\i\d\m\5\k\o\8\c\m\e\o\v\u\q\9\v\w\g\e\h\l\j\c\a\4\8\o\8\8\m\6\c\c\4\y\y\t\a\7\8\e\x\g\s\2\4\0\b\9\l\x\1\3\k\6\d\g\a\0\q\t\n\e\d\4\z\q\t\5\a\8\p\d\5\1\e\d\6\n\c\3\l\l\3\f\f\d\k\j\2\f\1\r\b\i\7\9\m\q\y\m\p\4\z\e\s\9\2\x\y\d\l\8\q\j\3\q\z\4\l\r\s\o\u\3\l\7\k\6\m\v\r\x ]] 00:07:35.081 00:23:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:35.081 00:23:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:35.341 [2024-12-17 00:23:21.087466] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:35.341 [2024-12-17 00:23:21.087586] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72606 ] 00:07:35.341 [2024-12-17 00:23:21.225194] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.341 [2024-12-17 00:23:21.260576] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.341 [2024-12-17 00:23:21.292487] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:35.341  [2024-12-17T00:23:21.603Z] Copying: 512/512 [B] (average 500 kBps) 00:07:35.600 00:07:35.600 00:23:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ s5ym3qnum2bxnvxevngitmpvhroi0opemypd8x9dtokntpids0da8au8qswdqm7gi9qx3fpth24j4mjnrnsbxt7zihxgofk0kfu8os6i60392yo312llmp8j48f1lvjm02x3vodcdv3dbv0a67hnfay9z9jpwlg7tskazwe4i3v8z2fcurhj8oszjxy567527hke424zw60gzgmdq3bnsedcsx20n8vf1u0l3kdyukx8erml7o8t2dhl2pdolu92cvwef6uopm6g9vc53d7qkevvpn8o0evohwc6lk4nomkx9d75e5backn7qha1vxl0yda0qd4tmce21xozoekt0tv7ctzaqbby2ympbuprlhq7ss058fbvvjx1kbeidm5ko8cmeovuq9vwgehljca48o88m6cc4yyta78exgs240b9lx13k6dga0qtned4zqt5a8pd51ed6nc3ll3ffdkj2f1rbi79mqymp4zes92xydl8qj3qz4lrsou3l7k6mvrx == \s\5\y\m\3\q\n\u\m\2\b\x\n\v\x\e\v\n\g\i\t\m\p\v\h\r\o\i\0\o\p\e\m\y\p\d\8\x\9\d\t\o\k\n\t\p\i\d\s\0\d\a\8\a\u\8\q\s\w\d\q\m\7\g\i\9\q\x\3\f\p\t\h\2\4\j\4\m\j\n\r\n\s\b\x\t\7\z\i\h\x\g\o\f\k\0\k\f\u\8\o\s\6\i\6\0\3\9\2\y\o\3\1\2\l\l\m\p\8\j\4\8\f\1\l\v\j\m\0\2\x\3\v\o\d\c\d\v\3\d\b\v\0\a\6\7\h\n\f\a\y\9\z\9\j\p\w\l\g\7\t\s\k\a\z\w\e\4\i\3\v\8\z\2\f\c\u\r\h\j\8\o\s\z\j\x\y\5\6\7\5\2\7\h\k\e\4\2\4\z\w\6\0\g\z\g\m\d\q\3\b\n\s\e\d\c\s\x\2\0\n\8\v\f\1\u\0\l\3\k\d\y\u\k\x\8\e\r\m\l\7\o\8\t\2\d\h\l\2\p\d\o\l\u\9\2\c\v\w\e\f\6\u\o\p\m\6\g\9\v\c\5\3\d\7\q\k\e\v\v\p\n\8\o\0\e\v\o\h\w\c\6\l\k\4\n\o\m\k\x\9\d\7\5\e\5\b\a\c\k\n\7\q\h\a\1\v\x\l\0\y\d\a\0\q\d\4\t\m\c\e\2\1\x\o\z\o\e\k\t\0\t\v\7\c\t\z\a\q\b\b\y\2\y\m\p\b\u\p\r\l\h\q\7\s\s\0\5\8\f\b\v\v\j\x\1\k\b\e\i\d\m\5\k\o\8\c\m\e\o\v\u\q\9\v\w\g\e\h\l\j\c\a\4\8\o\8\8\m\6\c\c\4\y\y\t\a\7\8\e\x\g\s\2\4\0\b\9\l\x\1\3\k\6\d\g\a\0\q\t\n\e\d\4\z\q\t\5\a\8\p\d\5\1\e\d\6\n\c\3\l\l\3\f\f\d\k\j\2\f\1\r\b\i\7\9\m\q\y\m\p\4\z\e\s\9\2\x\y\d\l\8\q\j\3\q\z\4\l\r\s\o\u\3\l\7\k\6\m\v\r\x ]] 00:07:35.600 00:23:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:35.600 00:23:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:35.600 [2024-12-17 00:23:21.535089] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:35.600 [2024-12-17 00:23:21.535213] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72619 ] 00:07:35.859 [2024-12-17 00:23:21.672634] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.859 [2024-12-17 00:23:21.708725] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.859 [2024-12-17 00:23:21.735353] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:35.859  [2024-12-17T00:23:22.121Z] Copying: 512/512 [B] (average 500 kBps) 00:07:36.118 00:07:36.119 00:23:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ s5ym3qnum2bxnvxevngitmpvhroi0opemypd8x9dtokntpids0da8au8qswdqm7gi9qx3fpth24j4mjnrnsbxt7zihxgofk0kfu8os6i60392yo312llmp8j48f1lvjm02x3vodcdv3dbv0a67hnfay9z9jpwlg7tskazwe4i3v8z2fcurhj8oszjxy567527hke424zw60gzgmdq3bnsedcsx20n8vf1u0l3kdyukx8erml7o8t2dhl2pdolu92cvwef6uopm6g9vc53d7qkevvpn8o0evohwc6lk4nomkx9d75e5backn7qha1vxl0yda0qd4tmce21xozoekt0tv7ctzaqbby2ympbuprlhq7ss058fbvvjx1kbeidm5ko8cmeovuq9vwgehljca48o88m6cc4yyta78exgs240b9lx13k6dga0qtned4zqt5a8pd51ed6nc3ll3ffdkj2f1rbi79mqymp4zes92xydl8qj3qz4lrsou3l7k6mvrx == \s\5\y\m\3\q\n\u\m\2\b\x\n\v\x\e\v\n\g\i\t\m\p\v\h\r\o\i\0\o\p\e\m\y\p\d\8\x\9\d\t\o\k\n\t\p\i\d\s\0\d\a\8\a\u\8\q\s\w\d\q\m\7\g\i\9\q\x\3\f\p\t\h\2\4\j\4\m\j\n\r\n\s\b\x\t\7\z\i\h\x\g\o\f\k\0\k\f\u\8\o\s\6\i\6\0\3\9\2\y\o\3\1\2\l\l\m\p\8\j\4\8\f\1\l\v\j\m\0\2\x\3\v\o\d\c\d\v\3\d\b\v\0\a\6\7\h\n\f\a\y\9\z\9\j\p\w\l\g\7\t\s\k\a\z\w\e\4\i\3\v\8\z\2\f\c\u\r\h\j\8\o\s\z\j\x\y\5\6\7\5\2\7\h\k\e\4\2\4\z\w\6\0\g\z\g\m\d\q\3\b\n\s\e\d\c\s\x\2\0\n\8\v\f\1\u\0\l\3\k\d\y\u\k\x\8\e\r\m\l\7\o\8\t\2\d\h\l\2\p\d\o\l\u\9\2\c\v\w\e\f\6\u\o\p\m\6\g\9\v\c\5\3\d\7\q\k\e\v\v\p\n\8\o\0\e\v\o\h\w\c\6\l\k\4\n\o\m\k\x\9\d\7\5\e\5\b\a\c\k\n\7\q\h\a\1\v\x\l\0\y\d\a\0\q\d\4\t\m\c\e\2\1\x\o\z\o\e\k\t\0\t\v\7\c\t\z\a\q\b\b\y\2\y\m\p\b\u\p\r\l\h\q\7\s\s\0\5\8\f\b\v\v\j\x\1\k\b\e\i\d\m\5\k\o\8\c\m\e\o\v\u\q\9\v\w\g\e\h\l\j\c\a\4\8\o\8\8\m\6\c\c\4\y\y\t\a\7\8\e\x\g\s\2\4\0\b\9\l\x\1\3\k\6\d\g\a\0\q\t\n\e\d\4\z\q\t\5\a\8\p\d\5\1\e\d\6\n\c\3\l\l\3\f\f\d\k\j\2\f\1\r\b\i\7\9\m\q\y\m\p\4\z\e\s\9\2\x\y\d\l\8\q\j\3\q\z\4\l\r\s\o\u\3\l\7\k\6\m\v\r\x ]] 00:07:36.119 00:23:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:36.119 00:23:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:36.119 [2024-12-17 00:23:21.975348] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:36.119 [2024-12-17 00:23:21.975462] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72621 ] 00:07:36.119 [2024-12-17 00:23:22.113249] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.378 [2024-12-17 00:23:22.147775] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.378 [2024-12-17 00:23:22.176086] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:36.378  [2024-12-17T00:23:22.381Z] Copying: 512/512 [B] (average 250 kBps) 00:07:36.378 00:07:36.378 00:23:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ s5ym3qnum2bxnvxevngitmpvhroi0opemypd8x9dtokntpids0da8au8qswdqm7gi9qx3fpth24j4mjnrnsbxt7zihxgofk0kfu8os6i60392yo312llmp8j48f1lvjm02x3vodcdv3dbv0a67hnfay9z9jpwlg7tskazwe4i3v8z2fcurhj8oszjxy567527hke424zw60gzgmdq3bnsedcsx20n8vf1u0l3kdyukx8erml7o8t2dhl2pdolu92cvwef6uopm6g9vc53d7qkevvpn8o0evohwc6lk4nomkx9d75e5backn7qha1vxl0yda0qd4tmce21xozoekt0tv7ctzaqbby2ympbuprlhq7ss058fbvvjx1kbeidm5ko8cmeovuq9vwgehljca48o88m6cc4yyta78exgs240b9lx13k6dga0qtned4zqt5a8pd51ed6nc3ll3ffdkj2f1rbi79mqymp4zes92xydl8qj3qz4lrsou3l7k6mvrx == \s\5\y\m\3\q\n\u\m\2\b\x\n\v\x\e\v\n\g\i\t\m\p\v\h\r\o\i\0\o\p\e\m\y\p\d\8\x\9\d\t\o\k\n\t\p\i\d\s\0\d\a\8\a\u\8\q\s\w\d\q\m\7\g\i\9\q\x\3\f\p\t\h\2\4\j\4\m\j\n\r\n\s\b\x\t\7\z\i\h\x\g\o\f\k\0\k\f\u\8\o\s\6\i\6\0\3\9\2\y\o\3\1\2\l\l\m\p\8\j\4\8\f\1\l\v\j\m\0\2\x\3\v\o\d\c\d\v\3\d\b\v\0\a\6\7\h\n\f\a\y\9\z\9\j\p\w\l\g\7\t\s\k\a\z\w\e\4\i\3\v\8\z\2\f\c\u\r\h\j\8\o\s\z\j\x\y\5\6\7\5\2\7\h\k\e\4\2\4\z\w\6\0\g\z\g\m\d\q\3\b\n\s\e\d\c\s\x\2\0\n\8\v\f\1\u\0\l\3\k\d\y\u\k\x\8\e\r\m\l\7\o\8\t\2\d\h\l\2\p\d\o\l\u\9\2\c\v\w\e\f\6\u\o\p\m\6\g\9\v\c\5\3\d\7\q\k\e\v\v\p\n\8\o\0\e\v\o\h\w\c\6\l\k\4\n\o\m\k\x\9\d\7\5\e\5\b\a\c\k\n\7\q\h\a\1\v\x\l\0\y\d\a\0\q\d\4\t\m\c\e\2\1\x\o\z\o\e\k\t\0\t\v\7\c\t\z\a\q\b\b\y\2\y\m\p\b\u\p\r\l\h\q\7\s\s\0\5\8\f\b\v\v\j\x\1\k\b\e\i\d\m\5\k\o\8\c\m\e\o\v\u\q\9\v\w\g\e\h\l\j\c\a\4\8\o\8\8\m\6\c\c\4\y\y\t\a\7\8\e\x\g\s\2\4\0\b\9\l\x\1\3\k\6\d\g\a\0\q\t\n\e\d\4\z\q\t\5\a\8\p\d\5\1\e\d\6\n\c\3\l\l\3\f\f\d\k\j\2\f\1\r\b\i\7\9\m\q\y\m\p\4\z\e\s\9\2\x\y\d\l\8\q\j\3\q\z\4\l\r\s\o\u\3\l\7\k\6\m\v\r\x ]] 00:07:36.378 00:07:36.378 real 0m3.538s 00:07:36.378 user 0m1.781s 00:07:36.378 sys 0m0.774s 00:07:36.378 ************************************ 00:07:36.378 END TEST dd_flags_misc_forced_aio 00:07:36.378 ************************************ 00:07:36.378 00:23:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:36.378 00:23:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:36.637 00:23:22 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:07:36.637 00:23:22 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:36.637 00:23:22 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:36.637 ************************************ 00:07:36.637 END TEST spdk_dd_posix 00:07:36.637 ************************************ 00:07:36.637 00:07:36.637 real 0m16.292s 00:07:36.637 user 0m7.018s 00:07:36.637 sys 0m4.528s 00:07:36.637 00:23:22 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:07:36.637 00:23:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:36.637 00:23:22 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:36.637 00:23:22 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:36.637 00:23:22 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:36.637 00:23:22 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:36.637 ************************************ 00:07:36.637 START TEST spdk_dd_malloc 00:07:36.637 ************************************ 00:07:36.637 00:23:22 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:36.637 * Looking for test storage... 00:07:36.637 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:36.637 00:23:22 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:36.638 00:23:22 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # lcov --version 00:07:36.638 00:23:22 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:36.638 00:23:22 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:36.638 00:23:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:36.638 00:23:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:36.638 00:23:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:36.638 00:23:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:07:36.638 00:23:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:07:36.638 00:23:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:07:36.638 00:23:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:07:36.638 00:23:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:07:36.638 00:23:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:07:36.638 00:23:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:07:36.638 00:23:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:36.638 00:23:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:07:36.638 00:23:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:07:36.638 00:23:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:36.638 00:23:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:36.897 00:23:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:07:36.897 00:23:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:07:36.897 00:23:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:36.897 00:23:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:07:36.897 00:23:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:36.897 00:23:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:07:36.897 00:23:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:07:36.897 00:23:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:36.897 00:23:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:07:36.897 00:23:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:36.897 00:23:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:36.897 00:23:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:36.897 00:23:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:07:36.897 00:23:22 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:36.897 00:23:22 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:36.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.897 --rc genhtml_branch_coverage=1 00:07:36.897 --rc genhtml_function_coverage=1 00:07:36.897 --rc genhtml_legend=1 00:07:36.897 --rc geninfo_all_blocks=1 00:07:36.897 --rc geninfo_unexecuted_blocks=1 00:07:36.897 00:07:36.897 ' 00:07:36.897 00:23:22 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:36.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.897 --rc genhtml_branch_coverage=1 00:07:36.897 --rc genhtml_function_coverage=1 00:07:36.897 --rc genhtml_legend=1 00:07:36.897 --rc geninfo_all_blocks=1 00:07:36.897 --rc geninfo_unexecuted_blocks=1 00:07:36.897 00:07:36.897 ' 00:07:36.897 00:23:22 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:36.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.897 --rc genhtml_branch_coverage=1 00:07:36.897 --rc genhtml_function_coverage=1 00:07:36.897 --rc genhtml_legend=1 00:07:36.897 --rc geninfo_all_blocks=1 00:07:36.897 --rc geninfo_unexecuted_blocks=1 00:07:36.897 00:07:36.897 ' 00:07:36.897 00:23:22 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:36.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.897 --rc genhtml_branch_coverage=1 00:07:36.897 --rc genhtml_function_coverage=1 00:07:36.897 --rc genhtml_legend=1 00:07:36.897 --rc geninfo_all_blocks=1 00:07:36.897 --rc geninfo_unexecuted_blocks=1 00:07:36.897 00:07:36.897 ' 00:07:36.897 00:23:22 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:36.897 00:23:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:07:36.897 00:23:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:36.897 00:23:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:36.897 00:23:22 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:36.897 00:23:22 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.897 00:23:22 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.897 00:23:22 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.897 00:23:22 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:07:36.897 00:23:22 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.897 00:23:22 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:07:36.897 00:23:22 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:36.897 00:23:22 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:36.897 00:23:22 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:36.897 ************************************ 00:07:36.897 START TEST dd_malloc_copy 00:07:36.897 ************************************ 00:07:36.897 00:23:22 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1125 -- # malloc_copy 00:07:36.897 00:23:22 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:07:36.897 00:23:22 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:07:36.897 00:23:22 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:07:36.897 00:23:22 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:07:36.897 00:23:22 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:07:36.898 00:23:22 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:07:36.898 00:23:22 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:07:36.898 00:23:22 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:07:36.898 00:23:22 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:36.898 00:23:22 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:36.898 [2024-12-17 00:23:22.716050] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:36.898 [2024-12-17 00:23:22.716186] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72703 ] 00:07:36.898 { 00:07:36.898 "subsystems": [ 00:07:36.898 { 00:07:36.898 "subsystem": "bdev", 00:07:36.898 "config": [ 00:07:36.898 { 00:07:36.898 "params": { 00:07:36.898 "block_size": 512, 00:07:36.898 "num_blocks": 1048576, 00:07:36.898 "name": "malloc0" 00:07:36.898 }, 00:07:36.898 "method": "bdev_malloc_create" 00:07:36.898 }, 00:07:36.898 { 00:07:36.898 "params": { 00:07:36.898 "block_size": 512, 00:07:36.898 "num_blocks": 1048576, 00:07:36.898 "name": "malloc1" 00:07:36.898 }, 00:07:36.898 "method": "bdev_malloc_create" 00:07:36.898 }, 00:07:36.898 { 00:07:36.898 "method": "bdev_wait_for_examine" 00:07:36.898 } 00:07:36.898 ] 00:07:36.898 } 00:07:36.898 ] 00:07:36.898 } 00:07:36.898 [2024-12-17 00:23:22.851827] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.898 [2024-12-17 00:23:22.888737] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.156 [2024-12-17 00:23:22.920766] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.533  [2024-12-17T00:23:25.473Z] Copying: 203/512 [MB] (203 MBps) [2024-12-17T00:23:25.731Z] Copying: 426/512 [MB] (223 MBps) [2024-12-17T00:23:25.991Z] Copying: 512/512 [MB] (average 213 MBps) 00:07:39.988 00:07:39.988 00:23:25 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:07:39.988 00:23:25 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:39.988 00:23:25 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:39.988 00:23:25 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:39.988 [2024-12-17 00:23:25.847075] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:39.988 [2024-12-17 00:23:25.847172] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72745 ] 00:07:39.988 { 00:07:39.988 "subsystems": [ 00:07:39.988 { 00:07:39.988 "subsystem": "bdev", 00:07:39.988 "config": [ 00:07:39.988 { 00:07:39.988 "params": { 00:07:39.988 "block_size": 512, 00:07:39.988 "num_blocks": 1048576, 00:07:39.988 "name": "malloc0" 00:07:39.988 }, 00:07:39.988 "method": "bdev_malloc_create" 00:07:39.988 }, 00:07:39.988 { 00:07:39.988 "params": { 00:07:39.988 "block_size": 512, 00:07:39.988 "num_blocks": 1048576, 00:07:39.988 "name": "malloc1" 00:07:39.988 }, 00:07:39.988 "method": "bdev_malloc_create" 00:07:39.988 }, 00:07:39.988 { 00:07:39.988 "method": "bdev_wait_for_examine" 00:07:39.988 } 00:07:39.988 ] 00:07:39.988 } 00:07:39.988 ] 00:07:39.988 } 00:07:39.988 [2024-12-17 00:23:25.982777] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.246 [2024-12-17 00:23:26.015961] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.246 [2024-12-17 00:23:26.046587] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:41.622  [2024-12-17T00:23:28.561Z] Copying: 217/512 [MB] (217 MBps) [2024-12-17T00:23:28.820Z] Copying: 437/512 [MB] (220 MBps) [2024-12-17T00:23:29.079Z] Copying: 512/512 [MB] (average 216 MBps) 00:07:43.076 00:07:43.076 00:07:43.076 real 0m6.263s 00:07:43.076 user 0m5.633s 00:07:43.076 sys 0m0.481s 00:07:43.076 00:23:28 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:43.076 ************************************ 00:07:43.076 END TEST dd_malloc_copy 00:07:43.076 00:23:28 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:43.076 ************************************ 00:07:43.076 00:07:43.076 real 0m6.503s 00:07:43.076 user 0m5.771s 00:07:43.076 sys 0m0.592s 00:07:43.076 00:23:28 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:43.076 00:23:28 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:43.076 ************************************ 00:07:43.076 END TEST spdk_dd_malloc 00:07:43.076 ************************************ 00:07:43.076 00:23:29 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:43.076 00:23:29 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:43.076 00:23:29 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:43.076 00:23:29 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:43.076 ************************************ 00:07:43.076 START TEST spdk_dd_bdev_to_bdev 00:07:43.076 ************************************ 00:07:43.076 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:43.336 * Looking for test storage... 
00:07:43.336 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # lcov --version 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:43.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.336 --rc genhtml_branch_coverage=1 00:07:43.336 --rc genhtml_function_coverage=1 00:07:43.336 --rc genhtml_legend=1 00:07:43.336 --rc geninfo_all_blocks=1 00:07:43.336 --rc geninfo_unexecuted_blocks=1 00:07:43.336 00:07:43.336 ' 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:43.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.336 --rc genhtml_branch_coverage=1 00:07:43.336 --rc genhtml_function_coverage=1 00:07:43.336 --rc genhtml_legend=1 00:07:43.336 --rc geninfo_all_blocks=1 00:07:43.336 --rc geninfo_unexecuted_blocks=1 00:07:43.336 00:07:43.336 ' 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:43.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.336 --rc genhtml_branch_coverage=1 00:07:43.336 --rc genhtml_function_coverage=1 00:07:43.336 --rc genhtml_legend=1 00:07:43.336 --rc geninfo_all_blocks=1 00:07:43.336 --rc geninfo_unexecuted_blocks=1 00:07:43.336 00:07:43.336 ' 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:43.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.336 --rc genhtml_branch_coverage=1 00:07:43.336 --rc genhtml_function_coverage=1 00:07:43.336 --rc genhtml_legend=1 00:07:43.336 --rc geninfo_all_blocks=1 00:07:43.336 --rc geninfo_unexecuted_blocks=1 00:07:43.336 00:07:43.336 ' 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.336 00:23:29 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:43.336 ************************************ 00:07:43.336 START TEST dd_inflate_file 00:07:43.336 ************************************ 00:07:43.336 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:43.336 [2024-12-17 00:23:29.290788] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:43.336 [2024-12-17 00:23:29.290894] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72852 ] 00:07:43.597 [2024-12-17 00:23:29.430428] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.597 [2024-12-17 00:23:29.472549] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.597 [2024-12-17 00:23:29.506855] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:43.597  [2024-12-17T00:23:29.863Z] Copying: 64/64 [MB] (average 1523 MBps) 00:07:43.860 00:07:43.860 00:07:43.860 real 0m0.471s 00:07:43.860 user 0m0.250s 00:07:43.860 sys 0m0.244s 00:07:43.860 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:43.860 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:07:43.860 ************************************ 00:07:43.860 END TEST dd_inflate_file 00:07:43.860 ************************************ 00:07:43.860 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:07:43.860 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:07:43.860 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:43.860 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:07:43.860 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:43.860 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:43.860 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:43.860 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:43.860 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:43.860 ************************************ 00:07:43.860 START TEST dd_copy_to_out_bdev 00:07:43.860 ************************************ 00:07:43.860 00:23:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:43.860 { 00:07:43.860 "subsystems": [ 00:07:43.860 { 00:07:43.860 "subsystem": "bdev", 00:07:43.860 "config": [ 00:07:43.860 { 00:07:43.860 "params": { 00:07:43.860 "trtype": "pcie", 00:07:43.860 "traddr": "0000:00:10.0", 00:07:43.860 "name": "Nvme0" 00:07:43.860 }, 00:07:43.860 "method": "bdev_nvme_attach_controller" 00:07:43.860 }, 00:07:43.860 { 00:07:43.860 "params": { 00:07:43.860 "trtype": "pcie", 00:07:43.860 "traddr": "0000:00:11.0", 00:07:43.860 "name": "Nvme1" 00:07:43.860 }, 00:07:43.860 "method": "bdev_nvme_attach_controller" 00:07:43.860 }, 00:07:43.860 { 00:07:43.860 "method": "bdev_wait_for_examine" 00:07:43.860 } 00:07:43.860 ] 00:07:43.860 } 00:07:43.860 ] 00:07:43.860 } 00:07:43.860 [2024-12-17 00:23:29.822827] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:43.860 [2024-12-17 00:23:29.822930] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72891 ] 00:07:44.118 [2024-12-17 00:23:29.965941] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.118 [2024-12-17 00:23:30.006950] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.118 [2024-12-17 00:23:30.040189] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.496  [2024-12-17T00:23:31.499Z] Copying: 50/64 [MB] (50 MBps) [2024-12-17T00:23:31.758Z] Copying: 64/64 [MB] (average 50 MBps) 00:07:45.755 00:07:45.755 00:07:45.755 real 0m1.871s 00:07:45.755 user 0m1.675s 00:07:45.755 sys 0m1.537s 00:07:45.755 00:23:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:45.755 00:23:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:45.755 ************************************ 00:07:45.755 END TEST dd_copy_to_out_bdev 00:07:45.755 ************************************ 00:07:45.755 00:23:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:07:45.755 00:23:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:07:45.755 00:23:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:45.755 00:23:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:45.755 00:23:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:45.755 ************************************ 00:07:45.755 START TEST dd_offset_magic 00:07:45.755 ************************************ 00:07:45.755 00:23:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1125 -- # offset_magic 00:07:45.755 00:23:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:07:45.755 00:23:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:07:45.755 00:23:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:07:45.755 00:23:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:45.755 00:23:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:07:45.755 00:23:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:45.755 00:23:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:45.755 00:23:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:45.755 [2024-12-17 00:23:31.741603] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:45.756 [2024-12-17 00:23:31.741699] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72931 ] 00:07:45.756 { 00:07:45.756 "subsystems": [ 00:07:45.756 { 00:07:45.756 "subsystem": "bdev", 00:07:45.756 "config": [ 00:07:45.756 { 00:07:45.756 "params": { 00:07:45.756 "trtype": "pcie", 00:07:45.756 "traddr": "0000:00:10.0", 00:07:45.756 "name": "Nvme0" 00:07:45.756 }, 00:07:45.756 "method": "bdev_nvme_attach_controller" 00:07:45.756 }, 00:07:45.756 { 00:07:45.756 "params": { 00:07:45.756 "trtype": "pcie", 00:07:45.756 "traddr": "0000:00:11.0", 00:07:45.756 "name": "Nvme1" 00:07:45.756 }, 00:07:45.756 "method": "bdev_nvme_attach_controller" 00:07:45.756 }, 00:07:45.756 { 00:07:45.756 "method": "bdev_wait_for_examine" 00:07:45.756 } 00:07:45.756 ] 00:07:45.756 } 00:07:45.756 ] 00:07:45.756 } 00:07:46.015 [2024-12-17 00:23:31.876936] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.015 [2024-12-17 00:23:31.907612] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.015 [2024-12-17 00:23:31.933983] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:46.274  [2024-12-17T00:23:32.536Z] Copying: 65/65 [MB] (average 1120 MBps) 00:07:46.533 00:07:46.533 00:23:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:07:46.533 00:23:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:46.533 00:23:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:46.533 00:23:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:46.533 [2024-12-17 00:23:32.374631] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:46.533 [2024-12-17 00:23:32.375188] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72945 ] 00:07:46.533 { 00:07:46.533 "subsystems": [ 00:07:46.533 { 00:07:46.533 "subsystem": "bdev", 00:07:46.533 "config": [ 00:07:46.533 { 00:07:46.533 "params": { 00:07:46.533 "trtype": "pcie", 00:07:46.533 "traddr": "0000:00:10.0", 00:07:46.533 "name": "Nvme0" 00:07:46.533 }, 00:07:46.533 "method": "bdev_nvme_attach_controller" 00:07:46.533 }, 00:07:46.533 { 00:07:46.533 "params": { 00:07:46.533 "trtype": "pcie", 00:07:46.533 "traddr": "0000:00:11.0", 00:07:46.533 "name": "Nvme1" 00:07:46.533 }, 00:07:46.533 "method": "bdev_nvme_attach_controller" 00:07:46.533 }, 00:07:46.533 { 00:07:46.533 "method": "bdev_wait_for_examine" 00:07:46.533 } 00:07:46.533 ] 00:07:46.533 } 00:07:46.533 ] 00:07:46.533 } 00:07:46.533 [2024-12-17 00:23:32.511068] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.793 [2024-12-17 00:23:32.543623] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.793 [2024-12-17 00:23:32.574328] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:46.793  [2024-12-17T00:23:33.055Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:47.052 00:07:47.052 00:23:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:47.052 00:23:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:47.052 00:23:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:47.052 00:23:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:47.052 00:23:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:07:47.052 00:23:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:47.052 00:23:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:47.052 [2024-12-17 00:23:32.909840] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:47.052 [2024-12-17 00:23:32.909929] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72967 ] 00:07:47.052 { 00:07:47.052 "subsystems": [ 00:07:47.052 { 00:07:47.052 "subsystem": "bdev", 00:07:47.052 "config": [ 00:07:47.052 { 00:07:47.052 "params": { 00:07:47.052 "trtype": "pcie", 00:07:47.052 "traddr": "0000:00:10.0", 00:07:47.052 "name": "Nvme0" 00:07:47.052 }, 00:07:47.052 "method": "bdev_nvme_attach_controller" 00:07:47.052 }, 00:07:47.052 { 00:07:47.052 "params": { 00:07:47.052 "trtype": "pcie", 00:07:47.052 "traddr": "0000:00:11.0", 00:07:47.052 "name": "Nvme1" 00:07:47.052 }, 00:07:47.052 "method": "bdev_nvme_attach_controller" 00:07:47.052 }, 00:07:47.052 { 00:07:47.052 "method": "bdev_wait_for_examine" 00:07:47.052 } 00:07:47.052 ] 00:07:47.052 } 00:07:47.052 ] 00:07:47.052 } 00:07:47.052 [2024-12-17 00:23:33.045052] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.311 [2024-12-17 00:23:33.077684] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.311 [2024-12-17 00:23:33.103792] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.311  [2024-12-17T00:23:33.573Z] Copying: 65/65 [MB] (average 1203 MBps) 00:07:47.570 00:07:47.570 00:23:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:47.570 00:23:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:47.570 00:23:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:47.570 00:23:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:47.570 [2024-12-17 00:23:33.533063] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:47.570 [2024-12-17 00:23:33.533165] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72976 ] 00:07:47.570 { 00:07:47.570 "subsystems": [ 00:07:47.570 { 00:07:47.570 "subsystem": "bdev", 00:07:47.570 "config": [ 00:07:47.570 { 00:07:47.570 "params": { 00:07:47.570 "trtype": "pcie", 00:07:47.570 "traddr": "0000:00:10.0", 00:07:47.570 "name": "Nvme0" 00:07:47.570 }, 00:07:47.570 "method": "bdev_nvme_attach_controller" 00:07:47.570 }, 00:07:47.570 { 00:07:47.570 "params": { 00:07:47.570 "trtype": "pcie", 00:07:47.570 "traddr": "0000:00:11.0", 00:07:47.570 "name": "Nvme1" 00:07:47.570 }, 00:07:47.570 "method": "bdev_nvme_attach_controller" 00:07:47.570 }, 00:07:47.570 { 00:07:47.570 "method": "bdev_wait_for_examine" 00:07:47.570 } 00:07:47.570 ] 00:07:47.570 } 00:07:47.570 ] 00:07:47.570 } 00:07:47.830 [2024-12-17 00:23:33.672333] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.830 [2024-12-17 00:23:33.717256] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.830 [2024-12-17 00:23:33.754640] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.089  [2024-12-17T00:23:34.092Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:48.089 00:07:48.089 00:23:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:48.089 00:23:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:48.089 00:07:48.089 real 0m2.376s 00:07:48.089 user 0m1.750s 00:07:48.089 sys 0m0.621s 00:07:48.089 00:23:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:48.089 00:23:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:48.089 ************************************ 00:07:48.089 END TEST dd_offset_magic 00:07:48.089 ************************************ 00:07:48.348 00:23:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:07:48.348 00:23:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:07:48.348 00:23:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:48.348 00:23:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:48.348 00:23:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:48.348 00:23:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:48.348 00:23:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:48.348 00:23:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:07:48.348 00:23:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:48.348 00:23:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:48.348 00:23:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:48.348 [2024-12-17 00:23:34.157087] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:48.348 [2024-12-17 00:23:34.157694] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73013 ] 00:07:48.348 { 00:07:48.348 "subsystems": [ 00:07:48.348 { 00:07:48.348 "subsystem": "bdev", 00:07:48.348 "config": [ 00:07:48.348 { 00:07:48.348 "params": { 00:07:48.348 "trtype": "pcie", 00:07:48.348 "traddr": "0000:00:10.0", 00:07:48.348 "name": "Nvme0" 00:07:48.348 }, 00:07:48.348 "method": "bdev_nvme_attach_controller" 00:07:48.348 }, 00:07:48.348 { 00:07:48.348 "params": { 00:07:48.348 "trtype": "pcie", 00:07:48.348 "traddr": "0000:00:11.0", 00:07:48.348 "name": "Nvme1" 00:07:48.348 }, 00:07:48.348 "method": "bdev_nvme_attach_controller" 00:07:48.348 }, 00:07:48.348 { 00:07:48.348 "method": "bdev_wait_for_examine" 00:07:48.348 } 00:07:48.348 ] 00:07:48.348 } 00:07:48.348 ] 00:07:48.348 } 00:07:48.348 [2024-12-17 00:23:34.293431] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.348 [2024-12-17 00:23:34.325818] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.608 [2024-12-17 00:23:34.354618] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.608  [2024-12-17T00:23:34.870Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:07:48.867 00:07:48.867 00:23:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:07:48.867 00:23:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:07:48.867 00:23:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:48.867 00:23:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:48.868 00:23:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:48.868 00:23:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:48.868 00:23:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:07:48.868 00:23:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:48.868 00:23:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:48.868 00:23:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:48.868 [2024-12-17 00:23:34.689710] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:48.868 [2024-12-17 00:23:34.689794] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73029 ] 00:07:48.868 { 00:07:48.868 "subsystems": [ 00:07:48.868 { 00:07:48.868 "subsystem": "bdev", 00:07:48.868 "config": [ 00:07:48.868 { 00:07:48.868 "params": { 00:07:48.868 "trtype": "pcie", 00:07:48.868 "traddr": "0000:00:10.0", 00:07:48.868 "name": "Nvme0" 00:07:48.868 }, 00:07:48.868 "method": "bdev_nvme_attach_controller" 00:07:48.868 }, 00:07:48.868 { 00:07:48.868 "params": { 00:07:48.868 "trtype": "pcie", 00:07:48.868 "traddr": "0000:00:11.0", 00:07:48.868 "name": "Nvme1" 00:07:48.868 }, 00:07:48.868 "method": "bdev_nvme_attach_controller" 00:07:48.868 }, 00:07:48.868 { 00:07:48.868 "method": "bdev_wait_for_examine" 00:07:48.868 } 00:07:48.868 ] 00:07:48.868 } 00:07:48.868 ] 00:07:48.868 } 00:07:48.868 [2024-12-17 00:23:34.817997] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.868 [2024-12-17 00:23:34.851333] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.127 [2024-12-17 00:23:34.881889] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:49.127  [2024-12-17T00:23:35.388Z] Copying: 5120/5120 [kB] (average 1000 MBps) 00:07:49.385 00:07:49.385 00:23:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:07:49.385 00:07:49.385 real 0m6.177s 00:07:49.385 user 0m4.643s 00:07:49.385 sys 0m2.936s 00:07:49.385 00:23:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:49.385 00:23:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:49.385 ************************************ 00:07:49.385 END TEST spdk_dd_bdev_to_bdev 00:07:49.385 ************************************ 00:07:49.385 00:23:35 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:07:49.385 00:23:35 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:49.385 00:23:35 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:49.385 00:23:35 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:49.385 00:23:35 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:49.385 ************************************ 00:07:49.385 START TEST spdk_dd_uring 00:07:49.385 ************************************ 00:07:49.385 00:23:35 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:49.385 * Looking for test storage... 
00:07:49.385 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:49.385 00:23:35 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:49.385 00:23:35 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # lcov --version 00:07:49.385 00:23:35 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:49.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.644 --rc genhtml_branch_coverage=1 00:07:49.644 --rc genhtml_function_coverage=1 00:07:49.644 --rc genhtml_legend=1 00:07:49.644 --rc geninfo_all_blocks=1 00:07:49.644 --rc geninfo_unexecuted_blocks=1 00:07:49.644 00:07:49.644 ' 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:49.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.644 --rc genhtml_branch_coverage=1 00:07:49.644 --rc genhtml_function_coverage=1 00:07:49.644 --rc genhtml_legend=1 00:07:49.644 --rc geninfo_all_blocks=1 00:07:49.644 --rc geninfo_unexecuted_blocks=1 00:07:49.644 00:07:49.644 ' 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:49.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.644 --rc genhtml_branch_coverage=1 00:07:49.644 --rc genhtml_function_coverage=1 00:07:49.644 --rc genhtml_legend=1 00:07:49.644 --rc geninfo_all_blocks=1 00:07:49.644 --rc geninfo_unexecuted_blocks=1 00:07:49.644 00:07:49.644 ' 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:49.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.644 --rc genhtml_branch_coverage=1 00:07:49.644 --rc genhtml_function_coverage=1 00:07:49.644 --rc genhtml_legend=1 00:07:49.644 --rc geninfo_all_blocks=1 00:07:49.644 --rc geninfo_unexecuted_blocks=1 00:07:49.644 00:07:49.644 ' 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:49.644 ************************************ 00:07:49.644 START TEST dd_uring_copy 00:07:49.644 ************************************ 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1125 -- # uring_zram_copy 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:49.644 
00:23:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:07:49.644 00:23:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:07:49.645 00:23:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:07:49.645 00:23:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:49.645 00:23:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=pw30zsux8b7nvulkvmt22imexssdvg18nh125gno9yawtsea0p2luifa9lb17bzfb4s20ad24yfyxzkqb4wg48kabjt0rbilc56zo2lbaa05xss9qghgaitfxrxr4pchnxjxjbapsgqpkkcr3tko8sxx7by6kaqjbwupcv6peojeis2udhikcj3gp13mtzyohkei6ol4rkxmhj02najdy26a0f3kustuawhpujtojgk7ukeyckr2m7plcn9015ecfj77m76jmc07jz5vrjf0w4c5wfirq227heuxcaogb0v637dwmvk7gr8rgj4vtf30lp5xpb6nte0jrte0vaefhyl2yd0vjl8yqbsbf36geaof0pxwslzri493vfqd071j7pmvhh8fvmqvz2ziex60bgwdqlzxbg6ni7v783q9p6nsg8v8dqwzlp415eu022imcgcvxunaud056au9wk1nhawsl3g0ilr0mj0pzxplyoxrdjebv9gewlnjffst04muvypg54h1puqucdq8iofod40paj7keg0ppmnpsiycq96o40b6ab2s1pa4zc5ynbefjqwwa0m81b9yv15qzlxi1ynswq198v425pzfxd9br8151nx4nvjv8ats6kt2cqowb5vbilnwx2vn3c48xihqc56grvjmadfwff3hmfdfar7ks1lm4kwq2imdjax450tlfgc2ru1amege5zd299t2sil5wjwkxvzce55mqp37amwsneo4bei89mm5o6o57cfxytuifp7u1vnw99odygzrvae1dsgtmqd3araryu550ofe8yr588aixamba28fz8hxwikvv2pspcezx1nykjetj4d44xs6fh02ix0ih2o6479sne51zfs3eo1fjrdbp35zevhdpd1b3wlstadvewi8i8aku31q7aylo5h31rtuxgaon3c5frzz7efwvs0qyeeiqycymyk7oprjp4jmpsr87t6bbsba2hfxari6v9a5m4fux2j1gnflfty29idudmvq 00:07:49.645 00:23:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
pw30zsux8b7nvulkvmt22imexssdvg18nh125gno9yawtsea0p2luifa9lb17bzfb4s20ad24yfyxzkqb4wg48kabjt0rbilc56zo2lbaa05xss9qghgaitfxrxr4pchnxjxjbapsgqpkkcr3tko8sxx7by6kaqjbwupcv6peojeis2udhikcj3gp13mtzyohkei6ol4rkxmhj02najdy26a0f3kustuawhpujtojgk7ukeyckr2m7plcn9015ecfj77m76jmc07jz5vrjf0w4c5wfirq227heuxcaogb0v637dwmvk7gr8rgj4vtf30lp5xpb6nte0jrte0vaefhyl2yd0vjl8yqbsbf36geaof0pxwslzri493vfqd071j7pmvhh8fvmqvz2ziex60bgwdqlzxbg6ni7v783q9p6nsg8v8dqwzlp415eu022imcgcvxunaud056au9wk1nhawsl3g0ilr0mj0pzxplyoxrdjebv9gewlnjffst04muvypg54h1puqucdq8iofod40paj7keg0ppmnpsiycq96o40b6ab2s1pa4zc5ynbefjqwwa0m81b9yv15qzlxi1ynswq198v425pzfxd9br8151nx4nvjv8ats6kt2cqowb5vbilnwx2vn3c48xihqc56grvjmadfwff3hmfdfar7ks1lm4kwq2imdjax450tlfgc2ru1amege5zd299t2sil5wjwkxvzce55mqp37amwsneo4bei89mm5o6o57cfxytuifp7u1vnw99odygzrvae1dsgtmqd3araryu550ofe8yr588aixamba28fz8hxwikvv2pspcezx1nykjetj4d44xs6fh02ix0ih2o6479sne51zfs3eo1fjrdbp35zevhdpd1b3wlstadvewi8i8aku31q7aylo5h31rtuxgaon3c5frzz7efwvs0qyeeiqycymyk7oprjp4jmpsr87t6bbsba2hfxari6v9a5m4fux2j1gnflfty29idudmvq 00:07:49.645 00:23:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:07:49.645 [2024-12-17 00:23:35.525207] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:49.645 [2024-12-17 00:23:35.525340] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73101 ] 00:07:49.904 [2024-12-17 00:23:35.657536] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.904 [2024-12-17 00:23:35.690240] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.904 [2024-12-17 00:23:35.717543] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:50.163  [2024-12-17T00:23:36.425Z] Copying: 511/511 [MB] (average 1741 MBps) 00:07:50.422 00:07:50.422 00:23:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:07:50.422 00:23:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:50.422 00:23:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:50.422 00:23:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:50.681 [2024-12-17 00:23:36.425059] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:50.681 [2024-12-17 00:23:36.425206] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73117 ] 00:07:50.681 { 00:07:50.681 "subsystems": [ 00:07:50.681 { 00:07:50.681 "subsystem": "bdev", 00:07:50.681 "config": [ 00:07:50.681 { 00:07:50.681 "params": { 00:07:50.681 "block_size": 512, 00:07:50.681 "num_blocks": 1048576, 00:07:50.681 "name": "malloc0" 00:07:50.681 }, 00:07:50.681 "method": "bdev_malloc_create" 00:07:50.681 }, 00:07:50.681 { 00:07:50.681 "params": { 00:07:50.681 "filename": "/dev/zram1", 00:07:50.681 "name": "uring0" 00:07:50.681 }, 00:07:50.681 "method": "bdev_uring_create" 00:07:50.681 }, 00:07:50.681 { 00:07:50.681 "method": "bdev_wait_for_examine" 00:07:50.681 } 00:07:50.681 ] 00:07:50.681 } 00:07:50.681 ] 00:07:50.681 } 00:07:50.681 [2024-12-17 00:23:36.564014] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.681 [2024-12-17 00:23:36.597735] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.681 [2024-12-17 00:23:36.625534] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.058  [2024-12-17T00:23:38.998Z] Copying: 239/512 [MB] (239 MBps) [2024-12-17T00:23:38.998Z] Copying: 483/512 [MB] (243 MBps) [2024-12-17T00:23:39.257Z] Copying: 512/512 [MB] (average 241 MBps) 00:07:53.254 00:07:53.254 00:23:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:53.254 00:23:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:07:53.254 00:23:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:53.254 00:23:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:53.254 [2024-12-17 00:23:39.161358] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:53.254 [2024-12-17 00:23:39.161474] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73160 ] 00:07:53.254 { 00:07:53.254 "subsystems": [ 00:07:53.254 { 00:07:53.254 "subsystem": "bdev", 00:07:53.254 "config": [ 00:07:53.254 { 00:07:53.254 "params": { 00:07:53.254 "block_size": 512, 00:07:53.254 "num_blocks": 1048576, 00:07:53.254 "name": "malloc0" 00:07:53.254 }, 00:07:53.254 "method": "bdev_malloc_create" 00:07:53.254 }, 00:07:53.254 { 00:07:53.254 "params": { 00:07:53.254 "filename": "/dev/zram1", 00:07:53.254 "name": "uring0" 00:07:53.254 }, 00:07:53.254 "method": "bdev_uring_create" 00:07:53.254 }, 00:07:53.254 { 00:07:53.254 "method": "bdev_wait_for_examine" 00:07:53.254 } 00:07:53.254 ] 00:07:53.254 } 00:07:53.254 ] 00:07:53.254 } 00:07:53.514 [2024-12-17 00:23:39.295990] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.514 [2024-12-17 00:23:39.328851] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.514 [2024-12-17 00:23:39.360083] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:54.892  [2024-12-17T00:23:41.832Z] Copying: 179/512 [MB] (179 MBps) [2024-12-17T00:23:42.401Z] Copying: 372/512 [MB] (193 MBps) [2024-12-17T00:23:42.660Z] Copying: 512/512 [MB] (average 183 MBps) 00:07:56.657 00:07:56.657 00:23:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:56.658 00:23:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ pw30zsux8b7nvulkvmt22imexssdvg18nh125gno9yawtsea0p2luifa9lb17bzfb4s20ad24yfyxzkqb4wg48kabjt0rbilc56zo2lbaa05xss9qghgaitfxrxr4pchnxjxjbapsgqpkkcr3tko8sxx7by6kaqjbwupcv6peojeis2udhikcj3gp13mtzyohkei6ol4rkxmhj02najdy26a0f3kustuawhpujtojgk7ukeyckr2m7plcn9015ecfj77m76jmc07jz5vrjf0w4c5wfirq227heuxcaogb0v637dwmvk7gr8rgj4vtf30lp5xpb6nte0jrte0vaefhyl2yd0vjl8yqbsbf36geaof0pxwslzri493vfqd071j7pmvhh8fvmqvz2ziex60bgwdqlzxbg6ni7v783q9p6nsg8v8dqwzlp415eu022imcgcvxunaud056au9wk1nhawsl3g0ilr0mj0pzxplyoxrdjebv9gewlnjffst04muvypg54h1puqucdq8iofod40paj7keg0ppmnpsiycq96o40b6ab2s1pa4zc5ynbefjqwwa0m81b9yv15qzlxi1ynswq198v425pzfxd9br8151nx4nvjv8ats6kt2cqowb5vbilnwx2vn3c48xihqc56grvjmadfwff3hmfdfar7ks1lm4kwq2imdjax450tlfgc2ru1amege5zd299t2sil5wjwkxvzce55mqp37amwsneo4bei89mm5o6o57cfxytuifp7u1vnw99odygzrvae1dsgtmqd3araryu550ofe8yr588aixamba28fz8hxwikvv2pspcezx1nykjetj4d44xs6fh02ix0ih2o6479sne51zfs3eo1fjrdbp35zevhdpd1b3wlstadvewi8i8aku31q7aylo5h31rtuxgaon3c5frzz7efwvs0qyeeiqycymyk7oprjp4jmpsr87t6bbsba2hfxari6v9a5m4fux2j1gnflfty29idudmvq == 
\p\w\3\0\z\s\u\x\8\b\7\n\v\u\l\k\v\m\t\2\2\i\m\e\x\s\s\d\v\g\1\8\n\h\1\2\5\g\n\o\9\y\a\w\t\s\e\a\0\p\2\l\u\i\f\a\9\l\b\1\7\b\z\f\b\4\s\2\0\a\d\2\4\y\f\y\x\z\k\q\b\4\w\g\4\8\k\a\b\j\t\0\r\b\i\l\c\5\6\z\o\2\l\b\a\a\0\5\x\s\s\9\q\g\h\g\a\i\t\f\x\r\x\r\4\p\c\h\n\x\j\x\j\b\a\p\s\g\q\p\k\k\c\r\3\t\k\o\8\s\x\x\7\b\y\6\k\a\q\j\b\w\u\p\c\v\6\p\e\o\j\e\i\s\2\u\d\h\i\k\c\j\3\g\p\1\3\m\t\z\y\o\h\k\e\i\6\o\l\4\r\k\x\m\h\j\0\2\n\a\j\d\y\2\6\a\0\f\3\k\u\s\t\u\a\w\h\p\u\j\t\o\j\g\k\7\u\k\e\y\c\k\r\2\m\7\p\l\c\n\9\0\1\5\e\c\f\j\7\7\m\7\6\j\m\c\0\7\j\z\5\v\r\j\f\0\w\4\c\5\w\f\i\r\q\2\2\7\h\e\u\x\c\a\o\g\b\0\v\6\3\7\d\w\m\v\k\7\g\r\8\r\g\j\4\v\t\f\3\0\l\p\5\x\p\b\6\n\t\e\0\j\r\t\e\0\v\a\e\f\h\y\l\2\y\d\0\v\j\l\8\y\q\b\s\b\f\3\6\g\e\a\o\f\0\p\x\w\s\l\z\r\i\4\9\3\v\f\q\d\0\7\1\j\7\p\m\v\h\h\8\f\v\m\q\v\z\2\z\i\e\x\6\0\b\g\w\d\q\l\z\x\b\g\6\n\i\7\v\7\8\3\q\9\p\6\n\s\g\8\v\8\d\q\w\z\l\p\4\1\5\e\u\0\2\2\i\m\c\g\c\v\x\u\n\a\u\d\0\5\6\a\u\9\w\k\1\n\h\a\w\s\l\3\g\0\i\l\r\0\m\j\0\p\z\x\p\l\y\o\x\r\d\j\e\b\v\9\g\e\w\l\n\j\f\f\s\t\0\4\m\u\v\y\p\g\5\4\h\1\p\u\q\u\c\d\q\8\i\o\f\o\d\4\0\p\a\j\7\k\e\g\0\p\p\m\n\p\s\i\y\c\q\9\6\o\4\0\b\6\a\b\2\s\1\p\a\4\z\c\5\y\n\b\e\f\j\q\w\w\a\0\m\8\1\b\9\y\v\1\5\q\z\l\x\i\1\y\n\s\w\q\1\9\8\v\4\2\5\p\z\f\x\d\9\b\r\8\1\5\1\n\x\4\n\v\j\v\8\a\t\s\6\k\t\2\c\q\o\w\b\5\v\b\i\l\n\w\x\2\v\n\3\c\4\8\x\i\h\q\c\5\6\g\r\v\j\m\a\d\f\w\f\f\3\h\m\f\d\f\a\r\7\k\s\1\l\m\4\k\w\q\2\i\m\d\j\a\x\4\5\0\t\l\f\g\c\2\r\u\1\a\m\e\g\e\5\z\d\2\9\9\t\2\s\i\l\5\w\j\w\k\x\v\z\c\e\5\5\m\q\p\3\7\a\m\w\s\n\e\o\4\b\e\i\8\9\m\m\5\o\6\o\5\7\c\f\x\y\t\u\i\f\p\7\u\1\v\n\w\9\9\o\d\y\g\z\r\v\a\e\1\d\s\g\t\m\q\d\3\a\r\a\r\y\u\5\5\0\o\f\e\8\y\r\5\8\8\a\i\x\a\m\b\a\2\8\f\z\8\h\x\w\i\k\v\v\2\p\s\p\c\e\z\x\1\n\y\k\j\e\t\j\4\d\4\4\x\s\6\f\h\0\2\i\x\0\i\h\2\o\6\4\7\9\s\n\e\5\1\z\f\s\3\e\o\1\f\j\r\d\b\p\3\5\z\e\v\h\d\p\d\1\b\3\w\l\s\t\a\d\v\e\w\i\8\i\8\a\k\u\3\1\q\7\a\y\l\o\5\h\3\1\r\t\u\x\g\a\o\n\3\c\5\f\r\z\z\7\e\f\w\v\s\0\q\y\e\e\i\q\y\c\y\m\y\k\7\o\p\r\j\p\4\j\m\p\s\r\8\7\t\6\b\b\s\b\a\2\h\f\x\a\r\i\6\v\9\a\5\m\4\f\u\x\2\j\1\g\n\f\l\f\t\y\2\9\i\d\u\d\m\v\q ]] 00:07:56.658 00:23:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:56.658 00:23:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ pw30zsux8b7nvulkvmt22imexssdvg18nh125gno9yawtsea0p2luifa9lb17bzfb4s20ad24yfyxzkqb4wg48kabjt0rbilc56zo2lbaa05xss9qghgaitfxrxr4pchnxjxjbapsgqpkkcr3tko8sxx7by6kaqjbwupcv6peojeis2udhikcj3gp13mtzyohkei6ol4rkxmhj02najdy26a0f3kustuawhpujtojgk7ukeyckr2m7plcn9015ecfj77m76jmc07jz5vrjf0w4c5wfirq227heuxcaogb0v637dwmvk7gr8rgj4vtf30lp5xpb6nte0jrte0vaefhyl2yd0vjl8yqbsbf36geaof0pxwslzri493vfqd071j7pmvhh8fvmqvz2ziex60bgwdqlzxbg6ni7v783q9p6nsg8v8dqwzlp415eu022imcgcvxunaud056au9wk1nhawsl3g0ilr0mj0pzxplyoxrdjebv9gewlnjffst04muvypg54h1puqucdq8iofod40paj7keg0ppmnpsiycq96o40b6ab2s1pa4zc5ynbefjqwwa0m81b9yv15qzlxi1ynswq198v425pzfxd9br8151nx4nvjv8ats6kt2cqowb5vbilnwx2vn3c48xihqc56grvjmadfwff3hmfdfar7ks1lm4kwq2imdjax450tlfgc2ru1amege5zd299t2sil5wjwkxvzce55mqp37amwsneo4bei89mm5o6o57cfxytuifp7u1vnw99odygzrvae1dsgtmqd3araryu550ofe8yr588aixamba28fz8hxwikvv2pspcezx1nykjetj4d44xs6fh02ix0ih2o6479sne51zfs3eo1fjrdbp35zevhdpd1b3wlstadvewi8i8aku31q7aylo5h31rtuxgaon3c5frzz7efwvs0qyeeiqycymyk7oprjp4jmpsr87t6bbsba2hfxari6v9a5m4fux2j1gnflfty29idudmvq == 
\p\w\3\0\z\s\u\x\8\b\7\n\v\u\l\k\v\m\t\2\2\i\m\e\x\s\s\d\v\g\1\8\n\h\1\2\5\g\n\o\9\y\a\w\t\s\e\a\0\p\2\l\u\i\f\a\9\l\b\1\7\b\z\f\b\4\s\2\0\a\d\2\4\y\f\y\x\z\k\q\b\4\w\g\4\8\k\a\b\j\t\0\r\b\i\l\c\5\6\z\o\2\l\b\a\a\0\5\x\s\s\9\q\g\h\g\a\i\t\f\x\r\x\r\4\p\c\h\n\x\j\x\j\b\a\p\s\g\q\p\k\k\c\r\3\t\k\o\8\s\x\x\7\b\y\6\k\a\q\j\b\w\u\p\c\v\6\p\e\o\j\e\i\s\2\u\d\h\i\k\c\j\3\g\p\1\3\m\t\z\y\o\h\k\e\i\6\o\l\4\r\k\x\m\h\j\0\2\n\a\j\d\y\2\6\a\0\f\3\k\u\s\t\u\a\w\h\p\u\j\t\o\j\g\k\7\u\k\e\y\c\k\r\2\m\7\p\l\c\n\9\0\1\5\e\c\f\j\7\7\m\7\6\j\m\c\0\7\j\z\5\v\r\j\f\0\w\4\c\5\w\f\i\r\q\2\2\7\h\e\u\x\c\a\o\g\b\0\v\6\3\7\d\w\m\v\k\7\g\r\8\r\g\j\4\v\t\f\3\0\l\p\5\x\p\b\6\n\t\e\0\j\r\t\e\0\v\a\e\f\h\y\l\2\y\d\0\v\j\l\8\y\q\b\s\b\f\3\6\g\e\a\o\f\0\p\x\w\s\l\z\r\i\4\9\3\v\f\q\d\0\7\1\j\7\p\m\v\h\h\8\f\v\m\q\v\z\2\z\i\e\x\6\0\b\g\w\d\q\l\z\x\b\g\6\n\i\7\v\7\8\3\q\9\p\6\n\s\g\8\v\8\d\q\w\z\l\p\4\1\5\e\u\0\2\2\i\m\c\g\c\v\x\u\n\a\u\d\0\5\6\a\u\9\w\k\1\n\h\a\w\s\l\3\g\0\i\l\r\0\m\j\0\p\z\x\p\l\y\o\x\r\d\j\e\b\v\9\g\e\w\l\n\j\f\f\s\t\0\4\m\u\v\y\p\g\5\4\h\1\p\u\q\u\c\d\q\8\i\o\f\o\d\4\0\p\a\j\7\k\e\g\0\p\p\m\n\p\s\i\y\c\q\9\6\o\4\0\b\6\a\b\2\s\1\p\a\4\z\c\5\y\n\b\e\f\j\q\w\w\a\0\m\8\1\b\9\y\v\1\5\q\z\l\x\i\1\y\n\s\w\q\1\9\8\v\4\2\5\p\z\f\x\d\9\b\r\8\1\5\1\n\x\4\n\v\j\v\8\a\t\s\6\k\t\2\c\q\o\w\b\5\v\b\i\l\n\w\x\2\v\n\3\c\4\8\x\i\h\q\c\5\6\g\r\v\j\m\a\d\f\w\f\f\3\h\m\f\d\f\a\r\7\k\s\1\l\m\4\k\w\q\2\i\m\d\j\a\x\4\5\0\t\l\f\g\c\2\r\u\1\a\m\e\g\e\5\z\d\2\9\9\t\2\s\i\l\5\w\j\w\k\x\v\z\c\e\5\5\m\q\p\3\7\a\m\w\s\n\e\o\4\b\e\i\8\9\m\m\5\o\6\o\5\7\c\f\x\y\t\u\i\f\p\7\u\1\v\n\w\9\9\o\d\y\g\z\r\v\a\e\1\d\s\g\t\m\q\d\3\a\r\a\r\y\u\5\5\0\o\f\e\8\y\r\5\8\8\a\i\x\a\m\b\a\2\8\f\z\8\h\x\w\i\k\v\v\2\p\s\p\c\e\z\x\1\n\y\k\j\e\t\j\4\d\4\4\x\s\6\f\h\0\2\i\x\0\i\h\2\o\6\4\7\9\s\n\e\5\1\z\f\s\3\e\o\1\f\j\r\d\b\p\3\5\z\e\v\h\d\p\d\1\b\3\w\l\s\t\a\d\v\e\w\i\8\i\8\a\k\u\3\1\q\7\a\y\l\o\5\h\3\1\r\t\u\x\g\a\o\n\3\c\5\f\r\z\z\7\e\f\w\v\s\0\q\y\e\e\i\q\y\c\y\m\y\k\7\o\p\r\j\p\4\j\m\p\s\r\8\7\t\6\b\b\s\b\a\2\h\f\x\a\r\i\6\v\9\a\5\m\4\f\u\x\2\j\1\g\n\f\l\f\t\y\2\9\i\d\u\d\m\v\q ]] 00:07:56.658 00:23:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:56.917 00:23:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:56.917 00:23:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:07:56.917 00:23:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:56.917 00:23:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:56.917 [2024-12-17 00:23:42.891919] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:56.917 [2024-12-17 00:23:42.891996] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73225 ] 00:07:56.917 { 00:07:56.917 "subsystems": [ 00:07:56.917 { 00:07:56.917 "subsystem": "bdev", 00:07:56.917 "config": [ 00:07:56.917 { 00:07:56.917 "params": { 00:07:56.917 "block_size": 512, 00:07:56.917 "num_blocks": 1048576, 00:07:56.917 "name": "malloc0" 00:07:56.917 }, 00:07:56.917 "method": "bdev_malloc_create" 00:07:56.917 }, 00:07:56.917 { 00:07:56.917 "params": { 00:07:56.917 "filename": "/dev/zram1", 00:07:56.917 "name": "uring0" 00:07:56.917 }, 00:07:56.917 "method": "bdev_uring_create" 00:07:56.917 }, 00:07:56.917 { 00:07:56.917 "method": "bdev_wait_for_examine" 00:07:56.917 } 00:07:56.917 ] 00:07:56.917 } 00:07:56.917 ] 00:07:56.917 } 00:07:57.176 [2024-12-17 00:23:43.022731] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.176 [2024-12-17 00:23:43.055250] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.176 [2024-12-17 00:23:43.083293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:58.553  [2024-12-17T00:23:45.492Z] Copying: 167/512 [MB] (167 MBps) [2024-12-17T00:23:46.429Z] Copying: 334/512 [MB] (166 MBps) [2024-12-17T00:23:46.429Z] Copying: 502/512 [MB] (168 MBps) [2024-12-17T00:23:46.688Z] Copying: 512/512 [MB] (average 167 MBps) 00:08:00.685 00:08:00.685 00:23:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:08:00.685 00:23:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:08:00.685 00:23:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:00.685 00:23:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:00.685 00:23:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:08:00.685 00:23:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:08:00.685 00:23:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:00.685 00:23:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:00.685 [2024-12-17 00:23:46.539696] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:00.685 [2024-12-17 00:23:46.539802] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73275 ] 00:08:00.685 { 00:08:00.685 "subsystems": [ 00:08:00.685 { 00:08:00.685 "subsystem": "bdev", 00:08:00.685 "config": [ 00:08:00.685 { 00:08:00.685 "params": { 00:08:00.685 "block_size": 512, 00:08:00.685 "num_blocks": 1048576, 00:08:00.685 "name": "malloc0" 00:08:00.685 }, 00:08:00.685 "method": "bdev_malloc_create" 00:08:00.685 }, 00:08:00.685 { 00:08:00.685 "params": { 00:08:00.685 "filename": "/dev/zram1", 00:08:00.685 "name": "uring0" 00:08:00.685 }, 00:08:00.685 "method": "bdev_uring_create" 00:08:00.685 }, 00:08:00.685 { 00:08:00.685 "params": { 00:08:00.685 "name": "uring0" 00:08:00.685 }, 00:08:00.685 "method": "bdev_uring_delete" 00:08:00.685 }, 00:08:00.685 { 00:08:00.685 "method": "bdev_wait_for_examine" 00:08:00.685 } 00:08:00.685 ] 00:08:00.685 } 00:08:00.685 ] 00:08:00.685 } 00:08:00.685 [2024-12-17 00:23:46.674866] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.944 [2024-12-17 00:23:46.708400] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.944 [2024-12-17 00:23:46.736203] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:00.944  [2024-12-17T00:23:47.206Z] Copying: 0/0 [B] (average 0 Bps) 00:08:01.203 00:08:01.203 00:23:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:08:01.203 00:23:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:01.203 00:23:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:08:01.203 00:23:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # local es=0 00:08:01.203 00:23:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:01.203 00:23:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:01.203 00:23:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:01.203 00:23:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.203 00:23:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:01.203 00:23:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.203 00:23:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:01.203 00:23:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.203 00:23:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:01.203 00:23:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.203 00:23:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:01.203 00:23:47 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:01.203 [2024-12-17 00:23:47.175619] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:01.203 [2024-12-17 00:23:47.176192] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73302 ] 00:08:01.203 { 00:08:01.203 "subsystems": [ 00:08:01.203 { 00:08:01.203 "subsystem": "bdev", 00:08:01.203 "config": [ 00:08:01.203 { 00:08:01.203 "params": { 00:08:01.203 "block_size": 512, 00:08:01.203 "num_blocks": 1048576, 00:08:01.203 "name": "malloc0" 00:08:01.203 }, 00:08:01.203 "method": "bdev_malloc_create" 00:08:01.203 }, 00:08:01.203 { 00:08:01.203 "params": { 00:08:01.203 "filename": "/dev/zram1", 00:08:01.203 "name": "uring0" 00:08:01.203 }, 00:08:01.203 "method": "bdev_uring_create" 00:08:01.203 }, 00:08:01.203 { 00:08:01.203 "params": { 00:08:01.203 "name": "uring0" 00:08:01.203 }, 00:08:01.203 "method": "bdev_uring_delete" 00:08:01.203 }, 00:08:01.203 { 00:08:01.203 "method": "bdev_wait_for_examine" 00:08:01.203 } 00:08:01.203 ] 00:08:01.203 } 00:08:01.203 ] 00:08:01.203 } 00:08:01.462 [2024-12-17 00:23:47.314074] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.462 [2024-12-17 00:23:47.350764] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.462 [2024-12-17 00:23:47.381654] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:01.721 [2024-12-17 00:23:47.500952] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:08:01.721 [2024-12-17 00:23:47.501013] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:08:01.721 [2024-12-17 00:23:47.501024] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:08:01.721 [2024-12-17 00:23:47.501033] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:01.721 [2024-12-17 00:23:47.661149] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:01.980 00:23:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # es=237 00:08:01.980 00:23:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:01.980 00:23:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # es=109 00:08:01.980 00:23:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # case "$es" in 00:08:01.980 00:23:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@670 -- # es=1 00:08:01.980 00:23:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:01.980 00:23:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:08:01.980 00:23:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:08:01.980 00:23:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:08:01.980 00:23:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:08:01.980 00:23:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:08:01.980 00:23:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:02.255 00:08:02.255 real 0m12.571s 00:08:02.255 user 0m8.442s 00:08:02.255 sys 0m10.771s 00:08:02.255 00:23:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.255 00:23:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:02.255 ************************************ 00:08:02.255 END TEST dd_uring_copy 00:08:02.255 ************************************ 00:08:02.255 00:08:02.255 real 0m12.809s 00:08:02.255 user 0m8.566s 00:08:02.255 sys 0m10.891s 00:08:02.255 00:23:48 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.255 00:23:48 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:02.255 ************************************ 00:08:02.255 END TEST spdk_dd_uring 00:08:02.255 ************************************ 00:08:02.255 00:23:48 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:02.255 00:23:48 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:02.255 00:23:48 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.255 00:23:48 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:02.255 ************************************ 00:08:02.255 START TEST spdk_dd_sparse 00:08:02.255 ************************************ 00:08:02.255 00:23:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:02.255 * Looking for test storage... 00:08:02.255 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:02.255 00:23:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:02.255 00:23:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:02.255 00:23:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # lcov --version 00:08:02.539 00:23:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:02.539 00:23:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:02.539 00:23:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:02.539 00:23:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:02.539 00:23:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:08:02.539 00:23:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:08:02.539 00:23:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:08:02.539 00:23:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:08:02.539 00:23:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:08:02.539 00:23:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:08:02.539 00:23:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:08:02.539 00:23:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:02.539 00:23:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:08:02.539 00:23:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:08:02.539 00:23:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:02.539 00:23:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:02.539 00:23:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:08:02.539 00:23:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:08:02.539 00:23:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:02.539 00:23:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:08:02.539 00:23:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:08:02.539 00:23:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:08:02.539 00:23:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:08:02.539 00:23:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:02.539 00:23:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:08:02.539 00:23:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:08:02.539 00:23:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:02.539 00:23:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:02.539 00:23:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:08:02.539 00:23:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:02.539 00:23:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:02.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.539 --rc genhtml_branch_coverage=1 00:08:02.539 --rc genhtml_function_coverage=1 00:08:02.539 --rc genhtml_legend=1 00:08:02.539 --rc geninfo_all_blocks=1 00:08:02.539 --rc geninfo_unexecuted_blocks=1 00:08:02.539 00:08:02.539 ' 00:08:02.539 00:23:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:02.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.539 --rc genhtml_branch_coverage=1 00:08:02.539 --rc genhtml_function_coverage=1 00:08:02.539 --rc genhtml_legend=1 00:08:02.539 --rc geninfo_all_blocks=1 00:08:02.539 --rc geninfo_unexecuted_blocks=1 00:08:02.539 00:08:02.539 ' 00:08:02.539 00:23:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:02.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.539 --rc genhtml_branch_coverage=1 00:08:02.539 --rc genhtml_function_coverage=1 00:08:02.539 --rc genhtml_legend=1 00:08:02.539 --rc geninfo_all_blocks=1 00:08:02.539 --rc geninfo_unexecuted_blocks=1 00:08:02.539 00:08:02.539 ' 00:08:02.539 00:23:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:02.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.539 --rc genhtml_branch_coverage=1 00:08:02.539 --rc genhtml_function_coverage=1 00:08:02.539 --rc genhtml_legend=1 00:08:02.539 --rc geninfo_all_blocks=1 00:08:02.539 --rc geninfo_unexecuted_blocks=1 00:08:02.539 00:08:02.539 ' 00:08:02.539 00:23:48 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:02.539 00:23:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:08:02.539 00:23:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.539 00:23:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.540 00:23:48 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.540 00:23:48 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.540 00:23:48 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.540 00:23:48 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.540 00:23:48 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:08:02.540 00:23:48 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.540 00:23:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:08:02.540 00:23:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:08:02.540 00:23:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:08:02.540 00:23:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:08:02.540 00:23:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:08:02.540 00:23:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:08:02.540 00:23:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:08:02.540 00:23:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:08:02.540 00:23:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:08:02.540 00:23:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:08:02.540 00:23:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:08:02.540 1+0 records in 00:08:02.540 1+0 records out 00:08:02.540 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00598321 s, 701 MB/s 00:08:02.540 00:23:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:08:02.540 1+0 records in 00:08:02.540 1+0 records out 00:08:02.540 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00587345 s, 714 MB/s 00:08:02.540 00:23:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:08:02.540 1+0 records in 00:08:02.540 1+0 records out 00:08:02.540 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00598977 s, 700 MB/s 00:08:02.540 00:23:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:08:02.540 00:23:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:02.540 00:23:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.540 00:23:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:02.540 ************************************ 00:08:02.540 START TEST dd_sparse_file_to_file 00:08:02.540 ************************************ 00:08:02.540 00:23:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1125 -- # file_to_file 00:08:02.540 00:23:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:08:02.540 00:23:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:08:02.540 00:23:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:02.540 00:23:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:08:02.540 00:23:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:08:02.540 00:23:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:08:02.540 00:23:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:08:02.540 00:23:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:08:02.540 00:23:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:02.540 00:23:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:02.540 [2024-12-17 00:23:48.381785] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:02.540 [2024-12-17 00:23:48.381893] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73396 ] 00:08:02.540 { 00:08:02.540 "subsystems": [ 00:08:02.540 { 00:08:02.540 "subsystem": "bdev", 00:08:02.540 "config": [ 00:08:02.540 { 00:08:02.540 "params": { 00:08:02.540 "block_size": 4096, 00:08:02.540 "filename": "dd_sparse_aio_disk", 00:08:02.540 "name": "dd_aio" 00:08:02.540 }, 00:08:02.540 "method": "bdev_aio_create" 00:08:02.540 }, 00:08:02.540 { 00:08:02.540 "params": { 00:08:02.540 "lvs_name": "dd_lvstore", 00:08:02.540 "bdev_name": "dd_aio" 00:08:02.540 }, 00:08:02.540 "method": "bdev_lvol_create_lvstore" 00:08:02.540 }, 00:08:02.540 { 00:08:02.540 "method": "bdev_wait_for_examine" 00:08:02.540 } 00:08:02.540 ] 00:08:02.540 } 00:08:02.540 ] 00:08:02.540 } 00:08:02.540 [2024-12-17 00:23:48.509706] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.799 [2024-12-17 00:23:48.547073] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.799 [2024-12-17 00:23:48.575937] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:02.799  [2024-12-17T00:23:49.062Z] Copying: 12/36 [MB] (average 1000 MBps) 00:08:03.059 00:08:03.059 00:23:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:08:03.059 00:23:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:08:03.059 00:23:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:08:03.059 00:23:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:08:03.059 00:23:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:03.059 00:23:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:08:03.059 00:23:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:08:03.059 00:23:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:08:03.059 00:23:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:08:03.059 00:23:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:03.059 00:08:03.059 real 0m0.516s 00:08:03.059 user 0m0.305s 00:08:03.059 sys 0m0.235s 00:08:03.059 00:23:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.059 00:23:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:03.059 ************************************ 00:08:03.059 END TEST dd_sparse_file_to_file 00:08:03.059 ************************************ 00:08:03.059 00:23:48 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:08:03.059 00:23:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:03.059 00:23:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.059 00:23:48 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:03.059 ************************************ 00:08:03.059 START TEST dd_sparse_file_to_bdev 
00:08:03.059 ************************************ 00:08:03.059 00:23:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1125 -- # file_to_bdev 00:08:03.059 00:23:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:03.059 00:23:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:08:03.059 00:23:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:08:03.059 00:23:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:08:03.059 00:23:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:08:03.059 00:23:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:08:03.059 00:23:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:03.059 00:23:48 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:03.059 [2024-12-17 00:23:48.947012] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:03.059 [2024-12-17 00:23:48.947099] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73439 ] 00:08:03.059 { 00:08:03.059 "subsystems": [ 00:08:03.059 { 00:08:03.059 "subsystem": "bdev", 00:08:03.059 "config": [ 00:08:03.059 { 00:08:03.059 "params": { 00:08:03.059 "block_size": 4096, 00:08:03.059 "filename": "dd_sparse_aio_disk", 00:08:03.059 "name": "dd_aio" 00:08:03.059 }, 00:08:03.059 "method": "bdev_aio_create" 00:08:03.059 }, 00:08:03.059 { 00:08:03.059 "params": { 00:08:03.059 "lvs_name": "dd_lvstore", 00:08:03.059 "lvol_name": "dd_lvol", 00:08:03.059 "size_in_mib": 36, 00:08:03.059 "thin_provision": true 00:08:03.059 }, 00:08:03.059 "method": "bdev_lvol_create" 00:08:03.059 }, 00:08:03.059 { 00:08:03.059 "method": "bdev_wait_for_examine" 00:08:03.059 } 00:08:03.059 ] 00:08:03.059 } 00:08:03.059 ] 00:08:03.059 } 00:08:03.318 [2024-12-17 00:23:49.077335] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.318 [2024-12-17 00:23:49.115790] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.318 [2024-12-17 00:23:49.144352] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.318  [2024-12-17T00:23:49.580Z] Copying: 12/36 [MB] (average 545 MBps) 00:08:03.577 00:08:03.577 00:08:03.577 real 0m0.463s 00:08:03.577 user 0m0.289s 00:08:03.577 sys 0m0.233s 00:08:03.577 00:23:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.577 00:23:49 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:03.577 ************************************ 00:08:03.577 END TEST dd_sparse_file_to_bdev 00:08:03.577 ************************************ 00:08:03.577 00:23:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:08:03.577 00:23:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:03.577 00:23:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.577 00:23:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:03.577 ************************************ 00:08:03.578 START TEST dd_sparse_bdev_to_file 00:08:03.578 ************************************ 00:08:03.578 00:23:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1125 -- # bdev_to_file 00:08:03.578 00:23:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:08:03.578 00:23:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:08:03.578 00:23:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:03.578 00:23:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:08:03.578 00:23:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:08:03.578 00:23:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:08:03.578 00:23:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:03.578 00:23:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:03.578 { 00:08:03.578 "subsystems": [ 00:08:03.578 { 00:08:03.578 "subsystem": "bdev", 00:08:03.578 "config": [ 00:08:03.578 { 00:08:03.578 "params": { 00:08:03.578 "block_size": 4096, 00:08:03.578 "filename": "dd_sparse_aio_disk", 00:08:03.578 "name": "dd_aio" 00:08:03.578 }, 00:08:03.578 "method": "bdev_aio_create" 00:08:03.578 }, 00:08:03.578 { 00:08:03.578 "method": "bdev_wait_for_examine" 00:08:03.578 } 00:08:03.578 ] 00:08:03.578 } 00:08:03.578 ] 00:08:03.578 } 00:08:03.578 [2024-12-17 00:23:49.472420] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:03.578 [2024-12-17 00:23:49.472514] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73471 ] 00:08:03.837 [2024-12-17 00:23:49.608863] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.837 [2024-12-17 00:23:49.641641] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.837 [2024-12-17 00:23:49.669568] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.837  [2024-12-17T00:23:50.098Z] Copying: 12/36 [MB] (average 1090 MBps) 00:08:04.095 00:08:04.095 00:23:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:08:04.095 00:23:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:08:04.095 00:23:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:08:04.095 00:23:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:08:04.095 00:23:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:04.095 00:23:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:08:04.095 00:23:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:08:04.095 00:23:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:08:04.095 00:23:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:08:04.095 00:23:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:04.095 00:08:04.096 real 0m0.495s 00:08:04.096 user 0m0.292s 00:08:04.096 sys 0m0.240s 00:08:04.096 00:23:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:04.096 00:23:49 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:04.096 ************************************ 00:08:04.096 END TEST dd_sparse_bdev_to_file 00:08:04.096 ************************************ 00:08:04.096 00:23:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:08:04.096 00:23:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:08:04.096 00:23:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:08:04.096 00:23:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:08:04.096 00:23:49 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:08:04.096 00:08:04.096 real 0m1.861s 00:08:04.096 user 0m1.064s 00:08:04.096 sys 0m0.905s 00:08:04.096 00:23:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:04.096 ************************************ 00:08:04.096 END TEST spdk_dd_sparse 00:08:04.096 ************************************ 00:08:04.096 00:23:49 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:04.096 00:23:50 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:04.096 00:23:50 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:04.096 00:23:50 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:04.096 00:23:50 spdk_dd -- 
common/autotest_common.sh@10 -- # set +x 00:08:04.096 ************************************ 00:08:04.096 START TEST spdk_dd_negative 00:08:04.096 ************************************ 00:08:04.096 00:23:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:04.356 * Looking for test storage... 00:08:04.356 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # lcov --version 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:04.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.356 --rc genhtml_branch_coverage=1 00:08:04.356 --rc genhtml_function_coverage=1 00:08:04.356 --rc genhtml_legend=1 00:08:04.356 --rc geninfo_all_blocks=1 00:08:04.356 --rc geninfo_unexecuted_blocks=1 00:08:04.356 00:08:04.356 ' 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:04.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.356 --rc genhtml_branch_coverage=1 00:08:04.356 --rc genhtml_function_coverage=1 00:08:04.356 --rc genhtml_legend=1 00:08:04.356 --rc geninfo_all_blocks=1 00:08:04.356 --rc geninfo_unexecuted_blocks=1 00:08:04.356 00:08:04.356 ' 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:04.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.356 --rc genhtml_branch_coverage=1 00:08:04.356 --rc genhtml_function_coverage=1 00:08:04.356 --rc genhtml_legend=1 00:08:04.356 --rc geninfo_all_blocks=1 00:08:04.356 --rc geninfo_unexecuted_blocks=1 00:08:04.356 00:08:04.356 ' 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:04.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.356 --rc genhtml_branch_coverage=1 00:08:04.356 --rc genhtml_function_coverage=1 00:08:04.356 --rc genhtml_legend=1 00:08:04.356 --rc geninfo_all_blocks=1 00:08:04.356 --rc geninfo_unexecuted_blocks=1 00:08:04.356 00:08:04.356 ' 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:04.356 ************************************ 00:08:04.356 START TEST 
dd_invalid_arguments 00:08:04.356 ************************************ 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1125 -- # invalid_arguments 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:04.356 00:23:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:08:04.357 00:23:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:04.357 00:23:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.357 00:23:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.357 00:23:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.357 00:23:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.357 00:23:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.357 00:23:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.357 00:23:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.357 00:23:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:04.357 00:23:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:04.357 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:08:04.357 00:08:04.357 CPU options: 00:08:04.357 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:04.357 (like [0,1,10]) 00:08:04.357 --lcores lcore to CPU mapping list. The list is in the format: 00:08:04.357 [<,lcores[@CPUs]>...] 00:08:04.357 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:04.357 Within the group, '-' is used for range separator, 00:08:04.357 ',' is used for single number separator. 00:08:04.357 '( )' can be omitted for single element group, 00:08:04.357 '@' can be omitted if cpus and lcores have the same value 00:08:04.357 --disable-cpumask-locks Disable CPU core lock files. 00:08:04.357 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:04.357 pollers in the app support interrupt mode) 00:08:04.357 -p, --main-core main (primary) core for DPDK 00:08:04.357 00:08:04.357 Configuration options: 00:08:04.357 -c, --config, --json JSON config file 00:08:04.357 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:04.357 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:08:04.357 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:04.357 --rpcs-allowed comma-separated list of permitted RPCS 00:08:04.357 --json-ignore-init-errors don't exit on invalid config entry 00:08:04.357 00:08:04.357 Memory options: 00:08:04.357 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:04.357 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:04.357 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:04.357 -R, --huge-unlink unlink huge files after initialization 00:08:04.357 -n, --mem-channels number of memory channels used for DPDK 00:08:04.357 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:04.357 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:04.357 --no-huge run without using hugepages 00:08:04.357 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:08:04.357 -i, --shm-id shared memory ID (optional) 00:08:04.357 -g, --single-file-segments force creating just one hugetlbfs file 00:08:04.357 00:08:04.357 PCI options: 00:08:04.357 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:04.357 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:04.357 -u, --no-pci disable PCI access 00:08:04.357 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:04.357 00:08:04.357 Log options: 00:08:04.357 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:08:04.357 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:08:04.357 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:08:04.357 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:08:04.357 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:08:04.357 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:08:04.357 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:08:04.357 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:08:04.357 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:08:04.357 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:08:04.357 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:08:04.357 --silence-noticelog disable notice level logging to stderr 00:08:04.357 00:08:04.357 Trace options: 00:08:04.357 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:04.357 setting 0 to disable trace (default 32768) 00:08:04.357 Tracepoints vary in size and can use more than one trace entry. 00:08:04.357 -e, --tpoint-group [:] 00:08:04.357 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:08:04.357 [2024-12-17 00:23:50.285040] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:08:04.357 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:08:04.357 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:08:04.357 bdev_raid, all). 00:08:04.357 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:04.357 a tracepoint group. First tpoint inside a group can be enabled by 00:08:04.357 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:04.357 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:08:04.357 in /include/spdk_internal/trace_defs.h 00:08:04.357 00:08:04.357 Other options: 00:08:04.357 -h, --help show this usage 00:08:04.357 -v, --version print SPDK version 00:08:04.357 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:04.357 --env-context Opaque context for use of the env implementation 00:08:04.357 00:08:04.357 Application specific: 00:08:04.357 [--------- DD Options ---------] 00:08:04.357 --if Input file. Must specify either --if or --ib. 00:08:04.357 --ib Input bdev. Must specifier either --if or --ib 00:08:04.357 --of Output file. Must specify either --of or --ob. 00:08:04.357 --ob Output bdev. Must specify either --of or --ob. 00:08:04.357 --iflag Input file flags. 00:08:04.357 --oflag Output file flags. 00:08:04.357 --bs I/O unit size (default: 4096) 00:08:04.357 --qd Queue depth (default: 2) 00:08:04.357 --count I/O unit count. The number of I/O units to copy. (default: all) 00:08:04.357 --skip Skip this many I/O units at start of input. (default: 0) 00:08:04.357 --seek Skip this many I/O units at start of output. (default: 0) 00:08:04.357 --aio Force usage of AIO. (by default io_uring is used if available) 00:08:04.357 --sparse Enable hole skipping in input target 00:08:04.357 Available iflag and oflag values: 00:08:04.357 append - append mode 00:08:04.357 direct - use direct I/O for data 00:08:04.357 directory - fail unless a directory 00:08:04.357 dsync - use synchronized I/O for data 00:08:04.357 noatime - do not update access time 00:08:04.357 noctty - do not assign controlling terminal from file 00:08:04.357 nofollow - do not follow symlinks 00:08:04.357 nonblock - use non-blocking I/O 00:08:04.357 sync - use synchronized I/O for data and metadata 00:08:04.357 00:23:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:08:04.357 00:23:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:04.357 00:23:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:04.357 00:23:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:04.357 00:08:04.357 real 0m0.075s 00:08:04.357 user 0m0.051s 00:08:04.357 sys 0m0.023s 00:08:04.357 ************************************ 00:08:04.357 00:23:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:04.357 00:23:50 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:08:04.357 END TEST dd_invalid_arguments 00:08:04.357 ************************************ 00:08:04.357 00:23:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:08:04.357 00:23:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:04.357 00:23:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:04.357 00:23:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:04.357 ************************************ 00:08:04.357 START TEST dd_double_input 00:08:04.357 ************************************ 00:08:04.357 00:23:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1125 -- # double_input 00:08:04.357 00:23:50 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:04.357 00:23:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:08:04.357 00:23:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:04.357 00:23:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.357 00:23:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.357 00:23:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:04.617 [2024-12-17 00:23:50.409795] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:04.617 00:08:04.617 real 0m0.074s 00:08:04.617 user 0m0.046s 00:08:04.617 sys 0m0.027s 00:08:04.617 ************************************ 00:08:04.617 END TEST dd_double_input 00:08:04.617 ************************************ 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:04.617 ************************************ 00:08:04.617 START TEST dd_double_output 00:08:04.617 ************************************ 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1125 -- # double_output 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:04.617 [2024-12-17 00:23:50.533160] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:04.617 00:08:04.617 real 0m0.074s 00:08:04.617 user 0m0.042s 00:08:04.617 sys 0m0.031s 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:04.617 ************************************ 00:08:04.617 END TEST dd_double_output 00:08:04.617 ************************************ 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:04.617 ************************************ 00:08:04.617 START TEST dd_no_input 00:08:04.617 ************************************ 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1125 -- # no_input 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.617 00:23:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:04.618 00:23:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:04.877 [2024-12-17 00:23:50.658139] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:04.877 00:08:04.877 real 0m0.076s 00:08:04.877 user 0m0.049s 00:08:04.877 sys 0m0.026s 00:08:04.877 ************************************ 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:08:04.877 END TEST dd_no_input 00:08:04.877 ************************************ 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:04.877 ************************************ 00:08:04.877 START TEST dd_no_output 00:08:04.877 ************************************ 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1125 -- # no_output 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:04.877 [2024-12-17 00:23:50.768892] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:08:04.877 00:23:50 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:04.877 00:08:04.877 real 0m0.055s 00:08:04.877 user 0m0.032s 00:08:04.877 sys 0m0.021s 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:04.877 ************************************ 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:08:04.877 END TEST dd_no_output 00:08:04.877 ************************************ 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:04.877 ************************************ 00:08:04.877 START TEST dd_wrong_blocksize 00:08:04.877 ************************************ 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1125 -- # wrong_blocksize 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:04.877 00:23:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:04.877 [2024-12-17 00:23:50.871107] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:08:05.137 00:23:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:08:05.137 00:23:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:05.137 00:23:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:05.137 00:23:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:05.137 00:08:05.137 real 0m0.056s 00:08:05.137 user 0m0.038s 00:08:05.137 sys 0m0.017s 00:08:05.137 00:23:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:05.137 00:23:50 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:05.137 ************************************ 00:08:05.137 END TEST dd_wrong_blocksize 00:08:05.137 ************************************ 00:08:05.137 00:23:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:08:05.137 00:23:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:05.137 00:23:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:05.137 00:23:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:05.137 ************************************ 00:08:05.137 START TEST dd_smaller_blocksize 00:08:05.137 ************************************ 00:08:05.137 00:23:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1125 -- # smaller_blocksize 00:08:05.137 00:23:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:05.137 00:23:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:08:05.137 00:23:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:05.137 00:23:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.137 00:23:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:05.137 00:23:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.137 00:23:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:05.137 00:23:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.137 00:23:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:05.137 00:23:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.137 
00:23:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:05.137 00:23:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:05.137 [2024-12-17 00:23:50.992259] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:05.137 [2024-12-17 00:23:50.992378] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73697 ] 00:08:05.137 [2024-12-17 00:23:51.131779] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.396 [2024-12-17 00:23:51.174638] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.396 [2024-12-17 00:23:51.209533] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.396 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:05.396 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:05.396 [2024-12-17 00:23:51.228552] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:08:05.396 [2024-12-17 00:23:51.228583] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:05.396 [2024-12-17 00:23:51.296126] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:05.396 00:23:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:08:05.396 00:23:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:05.396 00:23:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:08:05.396 00:23:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:08:05.396 00:23:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:08:05.396 00:23:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:05.396 00:08:05.396 real 0m0.434s 00:08:05.396 user 0m0.214s 00:08:05.396 sys 0m0.114s 00:08:05.396 00:23:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:05.396 00:23:51 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:05.396 ************************************ 00:08:05.396 END TEST dd_smaller_blocksize 00:08:05.396 ************************************ 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:05.656 ************************************ 00:08:05.656 START TEST dd_invalid_count 00:08:05.656 ************************************ 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1125 -- # invalid_count 
00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:05.656 [2024-12-17 00:23:51.474695] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:05.656 00:08:05.656 real 0m0.073s 00:08:05.656 user 0m0.043s 00:08:05.656 sys 0m0.029s 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:08:05.656 ************************************ 00:08:05.656 END TEST dd_invalid_count 00:08:05.656 ************************************ 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:05.656 ************************************ 
00:08:05.656 START TEST dd_invalid_oflag 00:08:05.656 ************************************ 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1125 -- # invalid_oflag 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:05.656 [2024-12-17 00:23:51.597362] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:05.656 00:08:05.656 real 0m0.072s 00:08:05.656 user 0m0.051s 00:08:05.656 sys 0m0.019s 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:05.656 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:08:05.656 ************************************ 00:08:05.657 END TEST dd_invalid_oflag 00:08:05.657 ************************************ 00:08:05.657 00:23:51 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:08:05.657 00:23:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:05.657 00:23:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:05.657 00:23:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:05.916 ************************************ 00:08:05.916 START TEST dd_invalid_iflag 00:08:05.916 
************************************ 00:08:05.916 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1125 -- # invalid_iflag 00:08:05.916 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:05.916 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:08:05.916 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:05.916 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.916 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:05.916 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.916 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:05.916 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.916 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:05.916 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.916 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:05.916 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:05.916 [2024-12-17 00:23:51.719839] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:08:05.916 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:08:05.916 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:05.916 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:05.916 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:05.916 00:08:05.916 real 0m0.070s 00:08:05.916 user 0m0.043s 00:08:05.916 sys 0m0.025s 00:08:05.916 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:05.916 ************************************ 00:08:05.916 END TEST dd_invalid_iflag 00:08:05.916 ************************************ 00:08:05.916 00:23:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:08:05.916 00:23:51 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:08:05.916 00:23:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:05.916 00:23:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:05.916 00:23:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:05.916 ************************************ 00:08:05.916 START TEST dd_unknown_flag 00:08:05.916 ************************************ 00:08:05.916 
00:23:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1125 -- # unknown_flag 00:08:05.916 00:23:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:05.916 00:23:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:08:05.916 00:23:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:05.916 00:23:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.916 00:23:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:05.916 00:23:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.916 00:23:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:05.916 00:23:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.916 00:23:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:05.916 00:23:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.916 00:23:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:05.916 00:23:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:05.916 [2024-12-17 00:23:51.844076] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:05.916 [2024-12-17 00:23:51.844381] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73789 ] 00:08:06.176 [2024-12-17 00:23:51.984191] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.176 [2024-12-17 00:23:52.026334] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.176 [2024-12-17 00:23:52.062773] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:06.176 [2024-12-17 00:23:52.082345] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:06.176 [2024-12-17 00:23:52.082413] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:06.176 [2024-12-17 00:23:52.082477] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:06.176 [2024-12-17 00:23:52.082494] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:06.176 [2024-12-17 00:23:52.083048] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:08:06.176 [2024-12-17 00:23:52.083085] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:06.176 [2024-12-17 00:23:52.083147] app.c:1046:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:06.176 [2024-12-17 00:23:52.083160] app.c:1046:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:06.176 [2024-12-17 00:23:52.151773] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:06.435 00:23:52 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:08:06.435 00:23:52 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:06.435 00:23:52 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:08:06.435 00:23:52 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:08:06.435 00:23:52 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:08:06.435 00:23:52 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:06.435 00:08:06.435 real 0m0.441s 00:08:06.435 user 0m0.223s 00:08:06.435 sys 0m0.125s 00:08:06.435 00:23:52 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:06.435 00:23:52 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:08:06.435 ************************************ 00:08:06.435 END TEST dd_unknown_flag 00:08:06.435 ************************************ 00:08:06.435 00:23:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:08:06.435 00:23:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:06.435 00:23:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:06.435 00:23:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:06.435 ************************************ 00:08:06.435 START TEST dd_invalid_json 00:08:06.435 ************************************ 00:08:06.435 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1125 -- # invalid_json 00:08:06.435 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:06.435 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:08:06.435 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:06.435 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:08:06.435 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.435 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:06.436 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.436 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:06.436 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.436 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:06.436 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.436 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:06.436 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:06.436 [2024-12-17 00:23:52.333114] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:06.436 [2024-12-17 00:23:52.333365] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73818 ] 00:08:06.695 [2024-12-17 00:23:52.471887] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.695 [2024-12-17 00:23:52.514265] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.695 [2024-12-17 00:23:52.514382] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:08:06.695 [2024-12-17 00:23:52.514400] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:06.695 [2024-12-17 00:23:52.514411] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:06.695 [2024-12-17 00:23:52.514481] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:06.695 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:08:06.695 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:06.695 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:08:06.695 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:08:06.695 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:08:06.695 ************************************ 00:08:06.695 END TEST dd_invalid_json 00:08:06.695 ************************************ 00:08:06.695 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:06.695 00:08:06.695 real 0m0.304s 00:08:06.695 user 0m0.139s 00:08:06.695 sys 0m0.063s 00:08:06.695 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:06.695 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:08:06.695 00:23:52 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:08:06.695 00:23:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:06.695 00:23:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:06.695 00:23:52 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:06.695 ************************************ 00:08:06.695 START TEST dd_invalid_seek 00:08:06.695 ************************************ 00:08:06.695 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1125 -- # invalid_seek 00:08:06.695 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:06.695 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:06.695 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:08:06.695 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:06.695 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:06.695 
00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:08:06.695 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:06.695 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@650 -- # local es=0 00:08:06.695 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:06.695 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:08:06.695 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.695 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:08:06.695 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:08:06.695 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:06.695 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.695 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:06.695 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.695 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:06.695 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.695 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:06.695 00:23:52 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:06.954 [2024-12-17 00:23:52.697086] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:06.954 [2024-12-17 00:23:52.697359] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73842 ] 00:08:06.954 { 00:08:06.954 "subsystems": [ 00:08:06.954 { 00:08:06.954 "subsystem": "bdev", 00:08:06.954 "config": [ 00:08:06.954 { 00:08:06.954 "params": { 00:08:06.954 "block_size": 512, 00:08:06.954 "num_blocks": 512, 00:08:06.954 "name": "malloc0" 00:08:06.954 }, 00:08:06.954 "method": "bdev_malloc_create" 00:08:06.954 }, 00:08:06.954 { 00:08:06.954 "params": { 00:08:06.954 "block_size": 512, 00:08:06.954 "num_blocks": 512, 00:08:06.954 "name": "malloc1" 00:08:06.954 }, 00:08:06.954 "method": "bdev_malloc_create" 00:08:06.954 }, 00:08:06.954 { 00:08:06.954 "method": "bdev_wait_for_examine" 00:08:06.954 } 00:08:06.954 ] 00:08:06.954 } 00:08:06.954 ] 00:08:06.954 } 00:08:06.954 [2024-12-17 00:23:52.837127] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.954 [2024-12-17 00:23:52.879113] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.954 [2024-12-17 00:23:52.912983] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:07.214 [2024-12-17 00:23:52.958190] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:08:07.214 [2024-12-17 00:23:52.958269] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:07.214 [2024-12-17 00:23:53.028094] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:07.214 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # es=228 00:08:07.214 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:07.214 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@662 -- # es=100 00:08:07.214 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # case "$es" in 00:08:07.214 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@670 -- # es=1 00:08:07.214 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:07.214 00:08:07.214 real 0m0.462s 00:08:07.214 user 0m0.286s 00:08:07.214 sys 0m0.136s 00:08:07.214 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:07.214 ************************************ 00:08:07.214 END TEST dd_invalid_seek 00:08:07.214 ************************************ 00:08:07.214 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:08:07.214 00:23:53 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:08:07.214 00:23:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:07.214 00:23:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:07.214 00:23:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:07.214 ************************************ 00:08:07.214 START TEST dd_invalid_skip 00:08:07.214 ************************************ 00:08:07.214 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1125 -- # invalid_skip 00:08:07.214 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- 
dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:07.214 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:07.214 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:08:07.214 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:07.214 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:07.214 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:08:07.214 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:07.214 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@650 -- # local es=0 00:08:07.214 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:08:07.214 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:07.214 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:08:07.214 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.214 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:08:07.214 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.214 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.214 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.214 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.214 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.214 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.214 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:07.214 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:07.214 { 00:08:07.214 "subsystems": [ 00:08:07.214 { 00:08:07.214 "subsystem": "bdev", 00:08:07.214 "config": [ 00:08:07.214 { 00:08:07.214 "params": { 00:08:07.214 "block_size": 512, 00:08:07.214 "num_blocks": 512, 00:08:07.214 "name": "malloc0" 00:08:07.214 }, 00:08:07.214 "method": "bdev_malloc_create" 00:08:07.214 }, 00:08:07.214 { 00:08:07.214 "params": { 00:08:07.214 "block_size": 512, 00:08:07.214 "num_blocks": 512, 00:08:07.214 "name": "malloc1" 
00:08:07.214 }, 00:08:07.214 "method": "bdev_malloc_create" 00:08:07.214 }, 00:08:07.214 { 00:08:07.214 "method": "bdev_wait_for_examine" 00:08:07.214 } 00:08:07.214 ] 00:08:07.214 } 00:08:07.214 ] 00:08:07.214 } 00:08:07.214 [2024-12-17 00:23:53.214025] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:07.214 [2024-12-17 00:23:53.214118] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73881 ] 00:08:07.474 [2024-12-17 00:23:53.348787] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.474 [2024-12-17 00:23:53.382723] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.474 [2024-12-17 00:23:53.411567] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:07.474 [2024-12-17 00:23:53.452482] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:08:07.474 [2024-12-17 00:23:53.452862] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:07.733 [2024-12-17 00:23:53.515965] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:07.733 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # es=228 00:08:07.733 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:07.733 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@662 -- # es=100 00:08:07.733 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # case "$es" in 00:08:07.733 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@670 -- # es=1 00:08:07.733 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:07.734 00:08:07.734 real 0m0.423s 00:08:07.734 user 0m0.266s 00:08:07.734 sys 0m0.109s 00:08:07.734 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:07.734 ************************************ 00:08:07.734 END TEST dd_invalid_skip 00:08:07.734 ************************************ 00:08:07.734 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:08:07.734 00:23:53 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:08:07.734 00:23:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:07.734 00:23:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:07.734 00:23:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:07.734 ************************************ 00:08:07.734 START TEST dd_invalid_input_count 00:08:07.734 ************************************ 00:08:07.734 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1125 -- # invalid_input_count 00:08:07.734 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:07.734 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:07.734 00:23:53 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:08:07.734 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:07.734 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:07.734 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:08:07.734 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:07.734 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@650 -- # local es=0 00:08:07.734 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:07.734 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.734 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:08:07.734 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:08:07.734 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:08:07.734 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.734 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.734 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.734 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.734 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.734 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.734 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:07.734 00:23:53 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:07.734 { 00:08:07.734 "subsystems": [ 00:08:07.734 { 00:08:07.734 "subsystem": "bdev", 00:08:07.734 "config": [ 00:08:07.734 { 00:08:07.734 "params": { 00:08:07.734 "block_size": 512, 00:08:07.734 "num_blocks": 512, 00:08:07.734 "name": "malloc0" 00:08:07.734 }, 00:08:07.734 "method": "bdev_malloc_create" 00:08:07.734 }, 00:08:07.734 { 00:08:07.734 "params": { 00:08:07.734 "block_size": 512, 00:08:07.734 "num_blocks": 512, 00:08:07.734 "name": "malloc1" 00:08:07.734 }, 00:08:07.734 "method": "bdev_malloc_create" 00:08:07.734 }, 00:08:07.734 { 00:08:07.734 "method": "bdev_wait_for_examine" 00:08:07.734 } 
00:08:07.734 ] 00:08:07.734 } 00:08:07.734 ] 00:08:07.734 } 00:08:07.734 [2024-12-17 00:23:53.687342] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:07.734 [2024-12-17 00:23:53.687447] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73909 ] 00:08:07.994 [2024-12-17 00:23:53.820386] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.994 [2024-12-17 00:23:53.853180] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.994 [2024-12-17 00:23:53.881211] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:07.994 [2024-12-17 00:23:53.922024] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:08:07.994 [2024-12-17 00:23:53.922095] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:07.994 [2024-12-17 00:23:53.980450] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:08.253 00:23:54 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # es=228 00:08:08.253 00:23:54 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:08.253 00:23:54 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@662 -- # es=100 00:08:08.253 00:23:54 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # case "$es" in 00:08:08.253 00:23:54 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@670 -- # es=1 00:08:08.253 00:23:54 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:08.253 00:08:08.253 real 0m0.421s 00:08:08.253 user 0m0.253s 00:08:08.253 sys 0m0.121s 00:08:08.253 ************************************ 00:08:08.253 END TEST dd_invalid_input_count 00:08:08.253 ************************************ 00:08:08.253 00:23:54 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:08.253 00:23:54 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:08:08.253 00:23:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:08:08.253 00:23:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:08.253 00:23:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:08.253 00:23:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:08.253 ************************************ 00:08:08.253 START TEST dd_invalid_output_count 00:08:08.253 ************************************ 00:08:08.253 00:23:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1125 -- # invalid_output_count 00:08:08.253 00:23:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:08.253 00:23:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:08.253 00:23:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A 
method_bdev_malloc_create_0 00:08:08.253 00:23:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:08.253 00:23:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@650 -- # local es=0 00:08:08.253 00:23:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:08:08.253 00:23:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:08.253 00:23:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.253 00:23:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:08:08.253 00:23:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:08:08.253 00:23:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.253 00:23:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.254 00:23:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.254 00:23:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.254 00:23:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.254 00:23:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.254 00:23:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:08.254 00:23:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:08.254 { 00:08:08.254 "subsystems": [ 00:08:08.254 { 00:08:08.254 "subsystem": "bdev", 00:08:08.254 "config": [ 00:08:08.254 { 00:08:08.254 "params": { 00:08:08.254 "block_size": 512, 00:08:08.254 "num_blocks": 512, 00:08:08.254 "name": "malloc0" 00:08:08.254 }, 00:08:08.254 "method": "bdev_malloc_create" 00:08:08.254 }, 00:08:08.254 { 00:08:08.254 "method": "bdev_wait_for_examine" 00:08:08.254 } 00:08:08.254 ] 00:08:08.254 } 00:08:08.254 ] 00:08:08.254 } 00:08:08.254 [2024-12-17 00:23:54.157849] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:08.254 [2024-12-17 00:23:54.158592] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73948 ] 00:08:08.513 [2024-12-17 00:23:54.290515] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.513 [2024-12-17 00:23:54.323788] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.513 [2024-12-17 00:23:54.355625] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:08.513 [2024-12-17 00:23:54.389610] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:08:08.513 [2024-12-17 00:23:54.389680] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:08.513 [2024-12-17 00:23:54.448115] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:08.772 00:23:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # es=228 00:08:08.773 00:23:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:08.773 00:23:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@662 -- # es=100 00:08:08.773 ************************************ 00:08:08.773 END TEST dd_invalid_output_count 00:08:08.773 ************************************ 00:08:08.773 00:23:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # case "$es" in 00:08:08.773 00:23:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@670 -- # es=1 00:08:08.773 00:23:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:08.773 00:08:08.773 real 0m0.420s 00:08:08.773 user 0m0.270s 00:08:08.773 sys 0m0.104s 00:08:08.773 00:23:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:08.773 00:23:54 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:08:08.773 00:23:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:08:08.773 00:23:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:08.773 00:23:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:08.773 00:23:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:08.773 ************************************ 00:08:08.773 START TEST dd_bs_not_multiple 00:08:08.773 ************************************ 00:08:08.773 00:23:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1125 -- # bs_not_multiple 00:08:08.773 00:23:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:08.773 00:23:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:08.773 00:23:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:08:08.773 00:23:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:08.773 00:23:54 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:08.773 00:23:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:08:08.773 00:23:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:08.773 00:23:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@650 -- # local es=0 00:08:08.773 00:23:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:08.773 00:23:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:08:08.773 00:23:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:08:08.773 00:23:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.773 00:23:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:08:08.773 00:23:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.773 00:23:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.773 00:23:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.773 00:23:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.773 00:23:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.773 00:23:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.773 00:23:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:08.773 00:23:54 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:08.773 [2024-12-17 00:23:54.631416] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:08.773 [2024-12-17 00:23:54.631514] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73974 ] 00:08:08.773 { 00:08:08.773 "subsystems": [ 00:08:08.773 { 00:08:08.773 "subsystem": "bdev", 00:08:08.773 "config": [ 00:08:08.773 { 00:08:08.773 "params": { 00:08:08.773 "block_size": 512, 00:08:08.773 "num_blocks": 512, 00:08:08.773 "name": "malloc0" 00:08:08.773 }, 00:08:08.773 "method": "bdev_malloc_create" 00:08:08.773 }, 00:08:08.773 { 00:08:08.773 "params": { 00:08:08.773 "block_size": 512, 00:08:08.773 "num_blocks": 512, 00:08:08.773 "name": "malloc1" 00:08:08.773 }, 00:08:08.773 "method": "bdev_malloc_create" 00:08:08.773 }, 00:08:08.773 { 00:08:08.773 "method": "bdev_wait_for_examine" 00:08:08.773 } 00:08:08.773 ] 00:08:08.773 } 00:08:08.773 ] 00:08:08.773 } 00:08:08.773 [2024-12-17 00:23:54.768163] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.043 [2024-12-17 00:23:54.802250] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.043 [2024-12-17 00:23:54.833631] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:09.043 [2024-12-17 00:23:54.876133] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:08:09.043 [2024-12-17 00:23:54.876185] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:09.043 [2024-12-17 00:23:54.935797] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:09.043 00:23:55 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # es=234 00:08:09.043 00:23:55 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:09.043 00:23:55 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@662 -- # es=106 00:08:09.043 00:23:55 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # case "$es" in 00:08:09.043 00:23:55 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@670 -- # es=1 00:08:09.044 00:23:55 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:09.044 ************************************ 00:08:09.044 END TEST dd_bs_not_multiple 00:08:09.044 ************************************ 00:08:09.044 00:08:09.044 real 0m0.442s 00:08:09.044 user 0m0.296s 00:08:09.044 sys 0m0.110s 00:08:09.044 00:23:55 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:09.044 00:23:55 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:08:09.304 ************************************ 00:08:09.304 END TEST spdk_dd_negative 00:08:09.304 00:08:09.304 real 0m5.035s 00:08:09.304 user 0m2.724s 00:08:09.304 sys 0m1.712s 00:08:09.304 00:23:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:09.304 00:23:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:09.304 ************************************ 00:08:09.304 00:08:09.304 real 1m3.786s 00:08:09.304 user 0m40.380s 00:08:09.304 sys 0m26.679s 00:08:09.304 00:23:55 spdk_dd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:09.304 ************************************ 00:08:09.304 END TEST spdk_dd 00:08:09.304 
************************************ 00:08:09.304 00:23:55 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:09.304 00:23:55 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:09.304 00:23:55 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:08:09.304 00:23:55 -- spdk/autotest.sh@256 -- # timing_exit lib 00:08:09.304 00:23:55 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:09.304 00:23:55 -- common/autotest_common.sh@10 -- # set +x 00:08:09.304 00:23:55 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:08:09.304 00:23:55 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:08:09.304 00:23:55 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:08:09.304 00:23:55 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:08:09.304 00:23:55 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:08:09.304 00:23:55 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:08:09.304 00:23:55 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:09.304 00:23:55 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:09.304 00:23:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:09.304 00:23:55 -- common/autotest_common.sh@10 -- # set +x 00:08:09.304 ************************************ 00:08:09.304 START TEST nvmf_tcp 00:08:09.304 ************************************ 00:08:09.304 00:23:55 nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:09.304 * Looking for test storage... 00:08:09.304 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:09.304 00:23:55 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:09.304 00:23:55 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:08:09.304 00:23:55 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:09.563 00:23:55 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:09.563 00:23:55 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:09.563 00:23:55 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:09.563 00:23:55 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:09.563 00:23:55 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:09.563 00:23:55 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:09.563 00:23:55 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:09.563 00:23:55 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:09.563 00:23:55 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:09.563 00:23:55 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:09.563 00:23:55 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:09.563 00:23:55 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:09.563 00:23:55 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:09.563 00:23:55 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:08:09.563 00:23:55 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:09.563 00:23:55 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:09.563 00:23:55 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:09.563 00:23:55 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:08:09.563 00:23:55 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:09.563 00:23:55 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:08:09.563 00:23:55 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:09.563 00:23:55 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:09.563 00:23:55 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:08:09.563 00:23:55 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:09.563 00:23:55 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:08:09.563 00:23:55 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:09.564 00:23:55 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:09.564 00:23:55 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:09.564 00:23:55 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:08:09.564 00:23:55 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:09.564 00:23:55 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:09.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.564 --rc genhtml_branch_coverage=1 00:08:09.564 --rc genhtml_function_coverage=1 00:08:09.564 --rc genhtml_legend=1 00:08:09.564 --rc geninfo_all_blocks=1 00:08:09.564 --rc geninfo_unexecuted_blocks=1 00:08:09.564 00:08:09.564 ' 00:08:09.564 00:23:55 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:09.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.564 --rc genhtml_branch_coverage=1 00:08:09.564 --rc genhtml_function_coverage=1 00:08:09.564 --rc genhtml_legend=1 00:08:09.564 --rc geninfo_all_blocks=1 00:08:09.564 --rc geninfo_unexecuted_blocks=1 00:08:09.564 00:08:09.564 ' 00:08:09.564 00:23:55 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:09.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.564 --rc genhtml_branch_coverage=1 00:08:09.564 --rc genhtml_function_coverage=1 00:08:09.564 --rc genhtml_legend=1 00:08:09.564 --rc geninfo_all_blocks=1 00:08:09.564 --rc geninfo_unexecuted_blocks=1 00:08:09.564 00:08:09.564 ' 00:08:09.564 00:23:55 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:09.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.564 --rc genhtml_branch_coverage=1 00:08:09.564 --rc genhtml_function_coverage=1 00:08:09.564 --rc genhtml_legend=1 00:08:09.564 --rc geninfo_all_blocks=1 00:08:09.564 --rc geninfo_unexecuted_blocks=1 00:08:09.564 00:08:09.564 ' 00:08:09.564 00:23:55 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:09.564 00:23:55 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:09.564 00:23:55 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:09.564 00:23:55 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:09.564 00:23:55 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:09.564 00:23:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:09.564 ************************************ 00:08:09.564 START TEST nvmf_target_core 00:08:09.564 ************************************ 00:08:09.564 00:23:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:09.564 * Looking for test storage... 00:08:09.564 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:09.564 00:23:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:09.564 00:23:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:08:09.564 00:23:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:09.564 00:23:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:09.564 00:23:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:09.564 00:23:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:09.564 00:23:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:09.564 00:23:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:08:09.564 00:23:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:08:09.564 00:23:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:08:09.564 00:23:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:08:09.564 00:23:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:08:09.564 00:23:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:08:09.564 00:23:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:08:09.564 00:23:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:09.564 00:23:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:08:09.564 00:23:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:08:09.564 00:23:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:09.564 00:23:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:09.564 00:23:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:09.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.824 --rc genhtml_branch_coverage=1 00:08:09.824 --rc genhtml_function_coverage=1 00:08:09.824 --rc genhtml_legend=1 00:08:09.824 --rc geninfo_all_blocks=1 00:08:09.824 --rc geninfo_unexecuted_blocks=1 00:08:09.824 00:08:09.824 ' 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:09.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.824 --rc genhtml_branch_coverage=1 00:08:09.824 --rc genhtml_function_coverage=1 00:08:09.824 --rc genhtml_legend=1 00:08:09.824 --rc geninfo_all_blocks=1 00:08:09.824 --rc geninfo_unexecuted_blocks=1 00:08:09.824 00:08:09.824 ' 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:09.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.824 --rc genhtml_branch_coverage=1 00:08:09.824 --rc genhtml_function_coverage=1 00:08:09.824 --rc genhtml_legend=1 00:08:09.824 --rc geninfo_all_blocks=1 00:08:09.824 --rc geninfo_unexecuted_blocks=1 00:08:09.824 00:08:09.824 ' 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:09.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.824 --rc genhtml_branch_coverage=1 00:08:09.824 --rc genhtml_function_coverage=1 00:08:09.824 --rc genhtml_legend=1 00:08:09.824 --rc geninfo_all_blocks=1 00:08:09.824 --rc geninfo_unexecuted_blocks=1 00:08:09.824 00:08:09.824 ' 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.824 00:23:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:09.825 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:09.825 ************************************ 00:08:09.825 START TEST nvmf_host_management 00:08:09.825 ************************************ 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:09.825 * Looking for test storage... 
00:08:09.825 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:09.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.825 --rc genhtml_branch_coverage=1 00:08:09.825 --rc genhtml_function_coverage=1 00:08:09.825 --rc genhtml_legend=1 00:08:09.825 --rc geninfo_all_blocks=1 00:08:09.825 --rc geninfo_unexecuted_blocks=1 00:08:09.825 00:08:09.825 ' 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:09.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.825 --rc genhtml_branch_coverage=1 00:08:09.825 --rc genhtml_function_coverage=1 00:08:09.825 --rc genhtml_legend=1 00:08:09.825 --rc geninfo_all_blocks=1 00:08:09.825 --rc geninfo_unexecuted_blocks=1 00:08:09.825 00:08:09.825 ' 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:09.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.825 --rc genhtml_branch_coverage=1 00:08:09.825 --rc genhtml_function_coverage=1 00:08:09.825 --rc genhtml_legend=1 00:08:09.825 --rc geninfo_all_blocks=1 00:08:09.825 --rc geninfo_unexecuted_blocks=1 00:08:09.825 00:08:09.825 ' 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:09.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.825 --rc genhtml_branch_coverage=1 00:08:09.825 --rc genhtml_function_coverage=1 00:08:09.825 --rc genhtml_legend=1 00:08:09.825 --rc geninfo_all_blocks=1 00:08:09.825 --rc geninfo_unexecuted_blocks=1 00:08:09.825 00:08:09.825 ' 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.825 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.826 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.826 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.826 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:09.826 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:10.086 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:10.086 00:23:55 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@456 -- # nvmf_veth_init 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:10.086 Cannot find device "nvmf_init_br" 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:10.086 Cannot find device "nvmf_init_br2" 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:10.086 Cannot find device "nvmf_tgt_br" 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:10.086 Cannot find device "nvmf_tgt_br2" 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:10.086 Cannot find device "nvmf_init_br" 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:10.086 Cannot find device "nvmf_init_br2" 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:10.086 Cannot find device "nvmf_tgt_br" 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:10.086 Cannot find device "nvmf_tgt_br2" 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:10.086 Cannot find device "nvmf_br" 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:10.086 Cannot find device "nvmf_init_if" 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:10.086 Cannot find device "nvmf_init_if2" 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:10.086 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:10.086 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:10.086 00:23:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:10.086 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:10.086 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:10.086 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:10.086 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:10.086 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:10.086 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:10.086 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:10.086 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:10.086 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:10.086 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:10.086 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:10.086 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:10.346 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:10.346 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 00:08:10.346 00:08:10.346 --- 10.0.0.3 ping statistics --- 00:08:10.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.346 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:10.346 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:10.346 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:08:10.346 00:08:10.346 --- 10.0.0.4 ping statistics --- 00:08:10.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.346 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:10.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:10.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:08:10.346 00:08:10.346 --- 10.0.0.1 ping statistics --- 00:08:10.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.346 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:10.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:10.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:08:10.346 00:08:10.346 --- 10.0.0.2 ping statistics --- 00:08:10.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.346 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@457 -- # return 0 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:10.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=74319 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 74319 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 74319 ']' 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:10.346 00:23:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:10.607 [2024-12-17 00:23:56.397755] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:10.607 [2024-12-17 00:23:56.397849] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.607 [2024-12-17 00:23:56.536026] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:10.607 [2024-12-17 00:23:56.582923] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:10.607 [2024-12-17 00:23:56.583386] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:10.607 [2024-12-17 00:23:56.583691] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:10.607 [2024-12-17 00:23:56.583978] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:10.607 [2024-12-17 00:23:56.584285] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:10.607 [2024-12-17 00:23:56.587951] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.607 [2024-12-17 00:23:56.588139] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:10.607 [2024-12-17 00:23:56.588337] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.607 [2024-12-17 00:23:56.588341] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:08:10.867 [2024-12-17 00:23:56.624259] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:11.434 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:11.434 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:11.434 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:11.434 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:11.434 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:11.434 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:11.434 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:11.434 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.434 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:11.434 [2024-12-17 00:23:57.418256] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:11.434 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.434 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:11.434 00:23:57 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:11.434 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:11.434 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:11.434 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:11.693 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:11.693 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.693 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:11.693 Malloc0 00:08:11.693 [2024-12-17 00:23:57.473380] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:11.693 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.693 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:11.693 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:11.693 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:11.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:11.693 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=74373 00:08:11.693 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 74373 /var/tmp/bdevperf.sock 00:08:11.693 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 74373 ']' 00:08:11.693 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:11.693 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:11.693 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:08:11.693 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:11.693 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:11.693 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:11.693 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:11.693 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:08:11.693 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:08:11.693 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:11.693 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:11.693 { 00:08:11.693 "params": { 00:08:11.693 "name": "Nvme$subsystem", 00:08:11.693 "trtype": "$TEST_TRANSPORT", 00:08:11.693 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:11.693 "adrfam": "ipv4", 00:08:11.693 "trsvcid": "$NVMF_PORT", 00:08:11.693 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:11.693 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:11.693 "hdgst": ${hdgst:-false}, 00:08:11.693 "ddgst": ${ddgst:-false} 00:08:11.693 }, 00:08:11.693 "method": "bdev_nvme_attach_controller" 00:08:11.693 } 00:08:11.693 EOF 00:08:11.693 )") 00:08:11.693 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:08:11.693 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:08:11.693 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:08:11.693 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:11.693 "params": { 00:08:11.693 "name": "Nvme0", 00:08:11.693 "trtype": "tcp", 00:08:11.693 "traddr": "10.0.0.3", 00:08:11.693 "adrfam": "ipv4", 00:08:11.693 "trsvcid": "4420", 00:08:11.693 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:11.693 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:11.693 "hdgst": false, 00:08:11.693 "ddgst": false 00:08:11.693 }, 00:08:11.693 "method": "bdev_nvme_attach_controller" 00:08:11.694 }' 00:08:11.694 [2024-12-17 00:23:57.576417] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:11.694 [2024-12-17 00:23:57.576514] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74373 ] 00:08:11.952 [2024-12-17 00:23:57.714692] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.952 [2024-12-17 00:23:57.749931] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.952 [2024-12-17 00:23:57.787735] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:11.952 Running I/O for 10 seconds... 
00:08:11.952 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:11.952 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:11.952 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:11.952 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.952 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:11.952 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.952 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:12.211 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:12.211 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:12.211 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:12.211 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:12.211 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:12.211 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:12.211 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:12.211 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:12.211 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:12.211 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.211 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:12.211 00:23:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.211 00:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:12.211 00:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:12.211 00:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:12.471 00:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:12.471 00:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:12.471 00:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:12.471 00:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:12.471 00:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.471 00:23:58 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:12.471 00:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.471 00:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:08:12.471 00:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:08:12.471 00:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:12.471 00:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:12.471 00:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:12.471 00:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:12.471 00:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.471 00:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:12.471 00:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.471 00:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:12.471 00:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.471 00:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:12.471 [2024-12-17 00:23:58.351843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.471 [2024-12-17 00:23:58.351891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.471 [2024-12-17 00:23:58.351931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.471 [2024-12-17 00:23:58.351942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.471 [2024-12-17 00:23:58.351953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.471 [2024-12-17 00:23:58.351961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.472 [2024-12-17 00:23:58.351972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.472 [2024-12-17 00:23:58.351980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.472 [2024-12-17 00:23:58.351990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.472 [2024-12-17 00:23:58.351999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.472 
[2024-12-17 00:23:58.352009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.472 [2024-12-17 00:23:58.352017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.472 [2024-12-17 00:23:58.352028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.472 [2024-12-17 00:23:58.352036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.472 [2024-12-17 00:23:58.352046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.472 [2024-12-17 00:23:58.352054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.472 [2024-12-17 00:23:58.352064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.472 [2024-12-17 00:23:58.352073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.472 [2024-12-17 00:23:58.352084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.472 [2024-12-17 00:23:58.352092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.472 [2024-12-17 00:23:58.352102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.472 [2024-12-17 00:23:58.352111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.472 [2024-12-17 00:23:58.352121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.472 [2024-12-17 00:23:58.352129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.472 [2024-12-17 00:23:58.352139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.472 [2024-12-17 00:23:58.352148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.472 [2024-12-17 00:23:58.352165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.472 [2024-12-17 00:23:58.352174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.472 [2024-12-17 00:23:58.352189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.472 [2024-12-17 00:23:58.352198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.472 [2024-12-17 
00:23:58.352208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.472 [2024-12-17 00:23:58.352217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.472 [2024-12-17 00:23:58.352253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.472 [2024-12-17 00:23:58.352264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.472 [2024-12-17 00:23:58.352276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.472 [2024-12-17 00:23:58.352285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.472 [2024-12-17 00:23:58.352296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.472 [2024-12-17 00:23:58.352306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.472 [2024-12-17 00:23:58.352317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.472 [2024-12-17 00:23:58.352339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.472 [2024-12-17 00:23:58.352351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.472 [2024-12-17 00:23:58.352360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.472 [2024-12-17 00:23:58.352372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.472 [2024-12-17 00:23:58.352381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.472 [2024-12-17 00:23:58.352392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.472 [2024-12-17 00:23:58.352402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.472 [2024-12-17 00:23:58.352413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.472 [2024-12-17 00:23:58.352422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.472 [2024-12-17 00:23:58.352433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.472 [2024-12-17 00:23:58.352442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.472 [2024-12-17 
00:23:58.352453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.472 [2024-12-17 00:23:58.352463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.472 [2024-12-17 00:23:58.352475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.472 [2024-12-17 00:23:58.352484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.472 [2024-12-17 00:23:58.352495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.472 [2024-12-17 00:23:58.352504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.472 [2024-12-17 00:23:58.352515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.472 [2024-12-17 00:23:58.352524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.472 [2024-12-17 00:23:58.352536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.472 [2024-12-17 00:23:58.352545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.472 [2024-12-17 00:23:58.352559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.472 [2024-12-17 00:23:58.352568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.472 [2024-12-17 00:23:58.352579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.472 [2024-12-17 00:23:58.352588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.472 [2024-12-17 00:23:58.352614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.472 [2024-12-17 00:23:58.352637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.472 [2024-12-17 00:23:58.352663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.472 [2024-12-17 00:23:58.352671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.472 [2024-12-17 00:23:58.352682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.472 [2024-12-17 00:23:58.352690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.472 [2024-12-17 
00:23:58.352700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.472 [2024-12-17 00:23:58.352708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.472 [2024-12-17 00:23:58.352718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.472 [2024-12-17 00:23:58.352727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.472 [2024-12-17 00:23:58.352737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.472 [2024-12-17 00:23:58.352746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.472 [2024-12-17 00:23:58.352756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.472 [2024-12-17 00:23:58.352765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.472 [2024-12-17 00:23:58.352775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.472 [2024-12-17 00:23:58.352783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.472 [2024-12-17 00:23:58.352794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.472 [2024-12-17 00:23:58.352802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.472 [2024-12-17 00:23:58.352812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.472 [2024-12-17 00:23:58.352820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.472 [2024-12-17 00:23:58.352830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.472 [2024-12-17 00:23:58.352839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.473 [2024-12-17 00:23:58.352849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.473 [2024-12-17 00:23:58.352857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.473 [2024-12-17 00:23:58.352867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.473 [2024-12-17 00:23:58.352875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.473 [2024-12-17 
00:23:58.352886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.473 [2024-12-17 00:23:58.352894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.473 [2024-12-17 00:23:58.352906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.473 [2024-12-17 00:23:58.352914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.473 [2024-12-17 00:23:58.352925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.473 [2024-12-17 00:23:58.352933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.473 [2024-12-17 00:23:58.352944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.473 [2024-12-17 00:23:58.352952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.473 [2024-12-17 00:23:58.352963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.473 [2024-12-17 00:23:58.352972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.473 [2024-12-17 00:23:58.352983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.473 [2024-12-17 00:23:58.352991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.473 [2024-12-17 00:23:58.353001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.473 [2024-12-17 00:23:58.353010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.473 [2024-12-17 00:23:58.353020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.473 [2024-12-17 00:23:58.353028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.473 [2024-12-17 00:23:58.353038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.473 [2024-12-17 00:23:58.353047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.473 [2024-12-17 00:23:58.353057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.473 [2024-12-17 00:23:58.353065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.473 [2024-12-17 
00:23:58.353076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.473 [2024-12-17 00:23:58.353084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.473 [2024-12-17 00:23:58.353094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.473 [2024-12-17 00:23:58.353102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.473 [2024-12-17 00:23:58.353112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.473 [2024-12-17 00:23:58.353121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.473 [2024-12-17 00:23:58.353131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.473 [2024-12-17 00:23:58.353139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.473 [2024-12-17 00:23:58.353149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.473 [2024-12-17 00:23:58.353158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.473 [2024-12-17 00:23:58.353168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.473 [2024-12-17 00:23:58.353176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.473 [2024-12-17 00:23:58.353186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.473 [2024-12-17 00:23:58.353195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.473 [2024-12-17 00:23:58.353206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.473 [2024-12-17 00:23:58.353216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.473 [2024-12-17 00:23:58.353226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:12.473 [2024-12-17 00:23:58.353235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.473 [2024-12-17 00:23:58.353244] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106f370 is same with the state(6) to be set 00:08:12.473 [2024-12-17 00:23:58.353290] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x106f370 was disconnected and freed. reset controller. 
00:08:12.473 [2024-12-17 00:23:58.353820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:12.473 [2024-12-17 00:23:58.354032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.473 [2024-12-17 00:23:58.354159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:12.473 [2024-12-17 00:23:58.354351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c 00:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.473 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.473 00:23:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:12.473 [2024-12-17 00:23:58.354578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:12.473 [2024-12-17 00:23:58.354801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.473 [2024-12-17 00:23:58.354931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:12.473 [2024-12-17 00:23:58.355051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:12.473 [2024-12-17 00:23:58.355146] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe57860 is same with the state(6) to be set 00:08:12.473 [2024-12-17 00:23:58.356328] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:12.473 task offset: 90112 on job bdev=Nvme0n1 fails 00:08:12.473 00:08:12.473 Latency(us) 00:08:12.473 [2024-12-17T00:23:58.476Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:12.473 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:12.473 Job: Nvme0n1 ended in about 0.47 seconds with error 00:08:12.473 Verification LBA range: start 0x0 length 0x400 00:08:12.473 Nvme0n1 : 0.47 1505.19 94.07 136.84 0.00 37819.41 3783.21 35985.22 00:08:12.473 [2024-12-17T00:23:58.476Z] =================================================================================================================== 00:08:12.473 [2024-12-17T00:23:58.476Z] Total : 1505.19 94.07 136.84 0.00 37819.41 3783.21 35985.22 00:08:12.473 [2024-12-17 00:23:58.358496] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:12.473 [2024-12-17 00:23:58.358531] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe57860 (9): Bad file descriptor 00:08:12.473 [2024-12-17 00:23:58.365578] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:13.408 00:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 74373 00:08:13.408 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (74373) - No such process 00:08:13.408 00:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:13.408 00:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:13.408 00:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:13.408 00:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:13.408 00:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:08:13.408 00:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:08:13.408 00:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:13.408 00:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:13.408 { 00:08:13.408 "params": { 00:08:13.408 "name": "Nvme$subsystem", 00:08:13.408 "trtype": "$TEST_TRANSPORT", 00:08:13.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:13.408 "adrfam": "ipv4", 00:08:13.408 "trsvcid": "$NVMF_PORT", 00:08:13.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:13.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:13.408 "hdgst": ${hdgst:-false}, 00:08:13.408 "ddgst": ${ddgst:-false} 00:08:13.408 }, 00:08:13.408 "method": "bdev_nvme_attach_controller" 00:08:13.408 } 00:08:13.408 EOF 00:08:13.408 )") 00:08:13.408 00:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:08:13.409 00:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:08:13.409 00:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:08:13.409 00:23:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:13.409 "params": { 00:08:13.409 "name": "Nvme0", 00:08:13.409 "trtype": "tcp", 00:08:13.409 "traddr": "10.0.0.3", 00:08:13.409 "adrfam": "ipv4", 00:08:13.409 "trsvcid": "4420", 00:08:13.409 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:13.409 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:13.409 "hdgst": false, 00:08:13.409 "ddgst": false 00:08:13.409 }, 00:08:13.409 "method": "bdev_nvme_attach_controller" 00:08:13.409 }' 00:08:13.667 [2024-12-17 00:23:59.418052] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:13.667 [2024-12-17 00:23:59.418147] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74413 ] 00:08:13.667 [2024-12-17 00:23:59.557673] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.667 [2024-12-17 00:23:59.594703] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.667 [2024-12-17 00:23:59.634621] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:13.926 Running I/O for 1 seconds... 00:08:14.860 1600.00 IOPS, 100.00 MiB/s 00:08:14.860 Latency(us) 00:08:14.860 [2024-12-17T00:24:00.863Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.860 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:14.860 Verification LBA range: start 0x0 length 0x400 00:08:14.860 Nvme0n1 : 1.04 1600.57 100.04 0.00 0.00 39257.00 3693.85 34317.03 00:08:14.860 [2024-12-17T00:24:00.863Z] =================================================================================================================== 00:08:14.860 [2024-12-17T00:24:00.863Z] Total : 1600.57 100.04 0.00 0.00 39257.00 3693.85 34317.03 00:08:15.118 00:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:15.118 00:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:15.118 00:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:15.118 00:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:15.118 00:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:15.118 00:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:15.118 00:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:15.118 00:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:15.118 00:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:15.118 00:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:15.118 00:24:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:15.118 rmmod nvme_tcp 00:08:15.118 rmmod nvme_fabrics 00:08:15.118 rmmod nvme_keyring 00:08:15.118 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:15.118 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:15.118 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:15.118 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 74319 ']' 00:08:15.118 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 74319 00:08:15.118 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 74319 ']' 00:08:15.118 00:24:01 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 74319 00:08:15.118 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:08:15.118 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:15.118 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74319 00:08:15.118 killing process with pid 74319 00:08:15.118 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:15.118 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:15.119 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74319' 00:08:15.119 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 74319 00:08:15.119 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 74319 00:08:15.377 [2024-12-17 00:24:01.198185] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:15.377 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:15.377 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:15.377 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:15.377 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:15.377 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:08:15.377 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:15.377 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:08:15.377 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:15.377 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:15.377 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:15.377 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:15.377 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:15.377 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:15.378 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:15.378 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:15.378 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:15.378 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:15.378 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:15.378 00:24:01 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:15.378 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:15.636 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:15.636 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:15.636 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:15.636 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.636 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:15.636 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.636 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:08:15.636 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:15.636 00:08:15.636 real 0m5.844s 00:08:15.636 user 0m20.921s 00:08:15.636 sys 0m1.416s 00:08:15.636 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:15.636 ************************************ 00:08:15.636 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:15.636 END TEST nvmf_host_management 00:08:15.636 ************************************ 00:08:15.636 00:24:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:15.636 00:24:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:15.636 00:24:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:15.636 00:24:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:15.636 ************************************ 00:08:15.636 START TEST nvmf_lvol 00:08:15.636 ************************************ 00:08:15.636 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:15.636 * Looking for test storage... 
00:08:15.636 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:15.636 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:15.636 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:08:15.636 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:15.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.896 --rc genhtml_branch_coverage=1 00:08:15.896 --rc genhtml_function_coverage=1 00:08:15.896 --rc genhtml_legend=1 00:08:15.896 --rc geninfo_all_blocks=1 00:08:15.896 --rc geninfo_unexecuted_blocks=1 00:08:15.896 00:08:15.896 ' 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:15.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.896 --rc genhtml_branch_coverage=1 00:08:15.896 --rc genhtml_function_coverage=1 00:08:15.896 --rc genhtml_legend=1 00:08:15.896 --rc geninfo_all_blocks=1 00:08:15.896 --rc geninfo_unexecuted_blocks=1 00:08:15.896 00:08:15.896 ' 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:15.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.896 --rc genhtml_branch_coverage=1 00:08:15.896 --rc genhtml_function_coverage=1 00:08:15.896 --rc genhtml_legend=1 00:08:15.896 --rc geninfo_all_blocks=1 00:08:15.896 --rc geninfo_unexecuted_blocks=1 00:08:15.896 00:08:15.896 ' 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:15.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.896 --rc genhtml_branch_coverage=1 00:08:15.896 --rc genhtml_function_coverage=1 00:08:15.896 --rc genhtml_legend=1 00:08:15.896 --rc geninfo_all_blocks=1 00:08:15.896 --rc geninfo_unexecuted_blocks=1 00:08:15.896 00:08:15.896 ' 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:15.896 00:24:01 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.896 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:15.897 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:15.897 
00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@456 -- # nvmf_veth_init 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:15.897 Cannot find device "nvmf_init_br" 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:15.897 Cannot find device "nvmf_init_br2" 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:15.897 Cannot find device "nvmf_tgt_br" 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:15.897 Cannot find device "nvmf_tgt_br2" 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:15.897 Cannot find device "nvmf_init_br" 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:15.897 Cannot find device "nvmf_init_br2" 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:15.897 Cannot find device "nvmf_tgt_br" 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:15.897 Cannot find device "nvmf_tgt_br2" 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:15.897 Cannot find device "nvmf_br" 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:15.897 Cannot find device "nvmf_init_if" 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:15.897 Cannot find device "nvmf_init_if2" 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:15.897 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:15.897 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:15.897 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:16.156 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:16.156 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:16.157 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:16.157 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:16.157 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:16.157 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:16.157 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:16.157 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:16.157 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:16.157 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:16.157 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:16.157 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:16.157 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:16.157 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:16.157 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:16.157 00:24:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:16.157 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:16.157 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:16.157 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:16.157 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:16.157 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:16.157 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:16.157 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:16.157 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:16.157 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:16.157 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:16.157 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:16.157 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:16.157 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:16.157 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:16.157 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:08:16.157 00:08:16.157 --- 10.0.0.3 ping statistics --- 00:08:16.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.157 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:08:16.157 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:16.157 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:16.157 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:08:16.157 00:08:16.157 --- 10.0.0.4 ping statistics --- 00:08:16.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.157 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:08:16.157 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:16.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:16.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:08:16.157 00:08:16.157 --- 10.0.0.1 ping statistics --- 00:08:16.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.157 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:08:16.157 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:16.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:16.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:08:16.157 00:08:16.157 --- 10.0.0.2 ping statistics --- 00:08:16.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.157 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:08:16.157 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:16.157 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@457 -- # return 0 00:08:16.157 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:16.157 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:16.157 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:16.157 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:16.157 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:16.157 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:16.157 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:16.157 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:16.157 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:16.157 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:16.157 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:16.157 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=74676 00:08:16.157 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:16.157 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 74676 00:08:16.157 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 74676 ']' 00:08:16.157 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.157 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:16.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.157 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.157 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:16.157 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:16.416 [2024-12-17 00:24:02.204516] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:16.416 [2024-12-17 00:24:02.204600] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.416 [2024-12-17 00:24:02.345603] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:16.416 [2024-12-17 00:24:02.388171] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:16.416 [2024-12-17 00:24:02.388256] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:16.416 [2024-12-17 00:24:02.388271] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:16.416 [2024-12-17 00:24:02.388281] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:16.416 [2024-12-17 00:24:02.388290] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:16.416 [2024-12-17 00:24:02.388442] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.416 [2024-12-17 00:24:02.389442] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:16.416 [2024-12-17 00:24:02.389459] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.675 [2024-12-17 00:24:02.424429] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:16.675 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:16.675 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:16.675 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:16.675 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:16.675 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:16.675 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.675 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:16.933 [2024-12-17 00:24:02.803589] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:16.933 00:24:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:17.192 00:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:17.192 00:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:17.451 00:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:17.451 00:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:18.018 00:24:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:18.018 00:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=8137a5ad-48a8-4a87-a229-efbe28958131 00:08:18.018 00:24:04 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8137a5ad-48a8-4a87-a229-efbe28958131 lvol 20 00:08:18.277 00:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f924fece-bd62-4b4c-9ee0-c57e3a2e349f 00:08:18.277 00:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:18.535 00:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f924fece-bd62-4b4c-9ee0-c57e3a2e349f 00:08:18.794 00:24:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:19.053 [2024-12-17 00:24:05.005443] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:19.053 00:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:19.314 00:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:19.314 00:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=74744 00:08:19.314 00:24:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:20.693 00:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot f924fece-bd62-4b4c-9ee0-c57e3a2e349f MY_SNAPSHOT 00:08:20.693 00:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=71dcd83f-6246-444e-9aa2-6f8b827e5a79 00:08:20.693 00:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize f924fece-bd62-4b4c-9ee0-c57e3a2e349f 30 00:08:20.952 00:24:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 71dcd83f-6246-444e-9aa2-6f8b827e5a79 MY_CLONE 00:08:21.211 00:24:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=48cbeba0-cf45-4b06-a212-a06d090101e9 00:08:21.211 00:24:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 48cbeba0-cf45-4b06-a212-a06d090101e9 00:08:21.814 00:24:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 74744 00:08:29.928 Initializing NVMe Controllers 00:08:29.928 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:08:29.928 Controller IO queue size 128, less than required. 00:08:29.928 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:29.928 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:29.928 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:29.928 Initialization complete. Launching workers. 
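Stripped of timestamps, the lvol exercise above reduces to four RPCs issued against a live volume while spdk_nvme_perf keeps writing to it. A minimal sketch, assuming the same rpc.py client and reusing the identifiers printed in the trace (each ID is captured from the previous call's output, just as the script does):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
lvol=f924fece-bd62-4b4c-9ee0-c57e3a2e349f              # lvol carved from the raid0-backed lvstore above
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)     # snapshot taken while perf I/O is still in flight
$rpc bdev_lvol_resize "$lvol" 30                        # grow the lvol from its 20 MiB initial to the 30 MiB final size
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)          # thin clone of the snapshot
$rpc bdev_lvol_inflate "$clone"                         # allocate all clusters so the clone no longer depends on its parent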
00:08:29.928 ======================================================== 00:08:29.928 Latency(us) 00:08:29.928 Device Information : IOPS MiB/s Average min max 00:08:29.928 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10377.70 40.54 12336.85 1620.05 47140.10 00:08:29.928 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10417.00 40.69 12291.35 553.79 80370.72 00:08:29.928 ======================================================== 00:08:29.928 Total : 20794.70 81.23 12314.06 553.79 80370.72 00:08:29.928 00:08:29.928 00:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:29.928 00:24:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f924fece-bd62-4b4c-9ee0-c57e3a2e349f 00:08:30.185 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8137a5ad-48a8-4a87-a229-efbe28958131 00:08:30.443 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:30.443 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:30.443 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:30.443 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:30.443 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:30.443 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:30.443 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:30.443 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:30.443 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:30.443 rmmod nvme_tcp 00:08:30.443 rmmod nvme_fabrics 00:08:30.443 rmmod nvme_keyring 00:08:30.443 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:30.443 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:30.444 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:30.444 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 74676 ']' 00:08:30.444 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 74676 00:08:30.444 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 74676 ']' 00:08:30.444 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 74676 00:08:30.444 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:08:30.444 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:30.444 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74676 00:08:30.444 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:30.444 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:30.444 killing process with pid 74676 00:08:30.444 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 74676' 00:08:30.444 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 74676 00:08:30.444 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 74676 00:08:30.702 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:30.702 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:30.702 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:30.702 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:30.702 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:08:30.702 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:30.702 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:08:30.702 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:30.702 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:30.702 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:30.702 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:30.702 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:30.702 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:30.702 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:30.702 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:30.702 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:30.702 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:30.702 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:30.960 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:30.960 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:30.960 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:30.960 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:30.960 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:30.960 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.960 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:30.960 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.960 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:08:30.960 00:08:30.960 real 0m15.333s 00:08:30.960 user 1m3.659s 00:08:30.960 sys 0m4.041s 00:08:30.960 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:08:30.960 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:30.960 ************************************ 00:08:30.960 END TEST nvmf_lvol 00:08:30.960 ************************************ 00:08:30.960 00:24:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:30.960 00:24:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:30.960 00:24:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.960 00:24:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:30.960 ************************************ 00:08:30.961 START TEST nvmf_lvs_grow 00:08:30.961 ************************************ 00:08:30.961 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:31.220 * Looking for test storage... 00:08:31.220 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:31.220 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:31.220 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:31.220 00:24:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:31.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.220 --rc genhtml_branch_coverage=1 00:08:31.220 --rc genhtml_function_coverage=1 00:08:31.220 --rc genhtml_legend=1 00:08:31.220 --rc geninfo_all_blocks=1 00:08:31.220 --rc geninfo_unexecuted_blocks=1 00:08:31.220 00:08:31.220 ' 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:31.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.220 --rc genhtml_branch_coverage=1 00:08:31.220 --rc genhtml_function_coverage=1 00:08:31.220 --rc genhtml_legend=1 00:08:31.220 --rc geninfo_all_blocks=1 00:08:31.220 --rc geninfo_unexecuted_blocks=1 00:08:31.220 00:08:31.220 ' 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:31.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.220 --rc genhtml_branch_coverage=1 00:08:31.220 --rc genhtml_function_coverage=1 00:08:31.220 --rc genhtml_legend=1 00:08:31.220 --rc geninfo_all_blocks=1 00:08:31.220 --rc geninfo_unexecuted_blocks=1 00:08:31.220 00:08:31.220 ' 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:31.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.220 --rc genhtml_branch_coverage=1 00:08:31.220 --rc genhtml_function_coverage=1 00:08:31.220 --rc genhtml_legend=1 00:08:31.220 --rc geninfo_all_blocks=1 00:08:31.220 --rc geninfo_unexecuted_blocks=1 00:08:31.220 00:08:31.220 ' 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:31.220 00:24:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.220 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:31.221 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
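The "[: : integer expression expected" complaint that common.sh prints in both test prologues above is harmless: bash's test builtin is handed an empty string where -eq expects an integer, as the traced '[' '' -eq 1 ']' at nvmf/common.sh line 33 shows. A minimal reproduction, plus one defensive spelling (illustrative only, not the harness's actual guard):

flag=""
[ "$flag" -eq 1 ] && echo enabled       # stderr: [: : integer expression expected; test fails, echo is skipped
[ "${flag:-0}" -eq 1 ] && echo enabled  # empty value falls back to 0, so the comparison stays quiet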
00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@456 -- # nvmf_veth_init 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:31.221 Cannot find device "nvmf_init_br" 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:31.221 Cannot find device "nvmf_init_br2" 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:31.221 Cannot find device "nvmf_tgt_br" 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:31.221 Cannot find device "nvmf_tgt_br2" 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:31.221 Cannot find device "nvmf_init_br" 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:31.221 Cannot find device "nvmf_init_br2" 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:08:31.221 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:31.479 Cannot find device "nvmf_tgt_br" 00:08:31.479 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:08:31.479 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:31.479 Cannot find device "nvmf_tgt_br2" 00:08:31.479 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:08:31.479 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:31.479 Cannot find device "nvmf_br" 00:08:31.479 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:08:31.479 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:31.479 Cannot find device "nvmf_init_if" 00:08:31.479 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:08:31.479 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:31.479 Cannot find device "nvmf_init_if2" 00:08:31.479 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:08:31.479 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:31.479 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:31.479 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:08:31.479 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:31.479 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:08:31.479 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:08:31.479 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:31.479 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:31.479 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:31.479 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:31.479 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:31.479 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:31.479 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:31.479 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:31.479 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:31.479 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:31.479 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:31.479 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:31.479 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:31.479 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:31.479 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:31.479 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:31.479 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:31.479 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:31.479 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:31.479 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:31.479 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:31.479 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:31.480 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:31.738 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:31.738 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:31.738 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
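The namespace and bridge plumbing that nvmf_veth_init just repeated for the lvs_grow run can be reproduced by hand with the same ip(8) calls; a condensed sketch using the first interface of each pair and the default 10.0.0.0/24 addresses shown in the trace:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # target end lives inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                          # the bridge joins the two host-side veth ends
ip link set nvmf_tgt_br master nvmf_br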
00:08:31.738 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:31.738 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:31.738 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:31.738 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:31.738 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:31.738 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:31.738 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:31.738 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:31.738 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:08:31.738 00:08:31.738 --- 10.0.0.3 ping statistics --- 00:08:31.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.738 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:08:31.738 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:31.738 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:31.738 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.072 ms 00:08:31.738 00:08:31.738 --- 10.0.0.4 ping statistics --- 00:08:31.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.738 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:08:31.738 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:31.738 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:31.738 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:08:31.738 00:08:31.738 --- 10.0.0.1 ping statistics --- 00:08:31.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.738 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:08:31.738 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:31.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:31.738 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:08:31.738 00:08:31.738 --- 10.0.0.2 ping statistics --- 00:08:31.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.738 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:08:31.738 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.738 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@457 -- # return 0 00:08:31.738 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:31.738 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.738 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:31.738 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:31.738 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.738 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:31.738 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:31.738 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:31.738 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:31.739 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:31.739 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:31.739 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=75128 00:08:31.739 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:31.739 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 75128 00:08:31.739 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 75128 ']' 00:08:31.739 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.739 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:31.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.739 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.739 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:31.739 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:31.739 [2024-12-17 00:24:17.638381] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:31.739 [2024-12-17 00:24:17.638484] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.998 [2024-12-17 00:24:17.777228] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.998 [2024-12-17 00:24:17.819097] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.998 [2024-12-17 00:24:17.819165] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.998 [2024-12-17 00:24:17.819179] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:31.998 [2024-12-17 00:24:17.819190] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:31.998 [2024-12-17 00:24:17.819198] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:31.998 [2024-12-17 00:24:17.819229] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.998 [2024-12-17 00:24:17.852564] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:31.998 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:31.998 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:31.998 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:31.998 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:31.998 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:31.998 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.998 00:24:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:32.256 [2024-12-17 00:24:18.223205] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.256 00:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:32.256 00:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:32.256 00:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:32.256 00:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:32.256 ************************************ 00:08:32.256 START TEST lvs_grow_clean 00:08:32.256 ************************************ 00:08:32.256 00:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:32.256 00:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:32.256 00:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:32.256 00:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:32.256 00:24:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:32.256 00:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:32.256 00:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:32.256 00:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:32.515 00:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:32.515 00:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:32.774 00:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:32.774 00:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:33.033 00:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b120b6d7-5dc5-4f19-af5f-7da737333adf 00:08:33.033 00:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b120b6d7-5dc5-4f19-af5f-7da737333adf 00:08:33.033 00:24:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:33.291 00:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:33.291 00:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:33.291 00:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b120b6d7-5dc5-4f19-af5f-7da737333adf lvol 150 00:08:33.550 00:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=f75e5add-f74e-47a0-ac41-ddc11fb85d9f 00:08:33.550 00:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:33.550 00:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:33.809 [2024-12-17 00:24:19.723303] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:33.809 [2024-12-17 00:24:19.723464] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:33.809 true 00:08:33.809 00:24:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b120b6d7-5dc5-4f19-af5f-7da737333adf 00:08:33.809 00:24:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:34.069 00:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:34.069 00:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:34.327 00:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f75e5add-f74e-47a0-ac41-ddc11fb85d9f 00:08:34.586 00:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:34.846 [2024-12-17 00:24:20.828094] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:34.846 00:24:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:35.105 00:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:35.105 00:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=75203 00:08:35.105 00:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:35.105 00:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 75203 /var/tmp/bdevperf.sock 00:08:35.105 00:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 75203 ']' 00:08:35.105 00:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:35.105 00:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:35.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:35.105 00:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:35.105 00:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:35.105 00:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:35.363 [2024-12-17 00:24:21.113628] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
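Editor's note: with the lvstore reporting the expected 49 data clusters, the 150 MiB lvol is exported over NVMe/TCP so bdevperf can keep writing to it while the lvstore is grown underneath. A condensed sketch of the export, reusing the NQN, address and lvol UUID from this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # subsystem that allows any host (-a), serial number SPDK0
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    # the lvol's UUID doubles as its bdev name
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f75e5add-f74e-47a0-ac41-ddc11fb85d9f
    # data and discovery listeners on the target-side address
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    "$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420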
00:08:35.363 [2024-12-17 00:24:21.113733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75203 ] 00:08:35.363 [2024-12-17 00:24:21.241426] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.363 [2024-12-17 00:24:21.276136] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.363 [2024-12-17 00:24:21.303949] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:35.622 00:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:35.622 00:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:35.622 00:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:35.881 Nvme0n1 00:08:35.881 00:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:36.139 [ 00:08:36.139 { 00:08:36.139 "name": "Nvme0n1", 00:08:36.139 "aliases": [ 00:08:36.139 "f75e5add-f74e-47a0-ac41-ddc11fb85d9f" 00:08:36.139 ], 00:08:36.139 "product_name": "NVMe disk", 00:08:36.139 "block_size": 4096, 00:08:36.139 "num_blocks": 38912, 00:08:36.139 "uuid": "f75e5add-f74e-47a0-ac41-ddc11fb85d9f", 00:08:36.139 "numa_id": -1, 00:08:36.139 "assigned_rate_limits": { 00:08:36.139 "rw_ios_per_sec": 0, 00:08:36.139 "rw_mbytes_per_sec": 0, 00:08:36.139 "r_mbytes_per_sec": 0, 00:08:36.139 "w_mbytes_per_sec": 0 00:08:36.139 }, 00:08:36.139 "claimed": false, 00:08:36.139 "zoned": false, 00:08:36.139 "supported_io_types": { 00:08:36.139 "read": true, 00:08:36.139 "write": true, 00:08:36.139 "unmap": true, 00:08:36.139 "flush": true, 00:08:36.139 "reset": true, 00:08:36.139 "nvme_admin": true, 00:08:36.139 "nvme_io": true, 00:08:36.139 "nvme_io_md": false, 00:08:36.139 "write_zeroes": true, 00:08:36.139 "zcopy": false, 00:08:36.139 "get_zone_info": false, 00:08:36.139 "zone_management": false, 00:08:36.139 "zone_append": false, 00:08:36.139 "compare": true, 00:08:36.139 "compare_and_write": true, 00:08:36.139 "abort": true, 00:08:36.139 "seek_hole": false, 00:08:36.139 "seek_data": false, 00:08:36.139 "copy": true, 00:08:36.139 "nvme_iov_md": false 00:08:36.140 }, 00:08:36.140 "memory_domains": [ 00:08:36.140 { 00:08:36.140 "dma_device_id": "system", 00:08:36.140 "dma_device_type": 1 00:08:36.140 } 00:08:36.140 ], 00:08:36.140 "driver_specific": { 00:08:36.140 "nvme": [ 00:08:36.140 { 00:08:36.140 "trid": { 00:08:36.140 "trtype": "TCP", 00:08:36.140 "adrfam": "IPv4", 00:08:36.140 "traddr": "10.0.0.3", 00:08:36.140 "trsvcid": "4420", 00:08:36.140 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:36.140 }, 00:08:36.140 "ctrlr_data": { 00:08:36.140 "cntlid": 1, 00:08:36.140 "vendor_id": "0x8086", 00:08:36.140 "model_number": "SPDK bdev Controller", 00:08:36.140 "serial_number": "SPDK0", 00:08:36.140 "firmware_revision": "24.09.1", 00:08:36.140 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:36.140 "oacs": { 00:08:36.140 "security": 0, 00:08:36.140 "format": 0, 00:08:36.140 "firmware": 0, 
00:08:36.140 "ns_manage": 0 00:08:36.140 }, 00:08:36.140 "multi_ctrlr": true, 00:08:36.140 "ana_reporting": false 00:08:36.140 }, 00:08:36.140 "vs": { 00:08:36.140 "nvme_version": "1.3" 00:08:36.140 }, 00:08:36.140 "ns_data": { 00:08:36.140 "id": 1, 00:08:36.140 "can_share": true 00:08:36.140 } 00:08:36.140 } 00:08:36.140 ], 00:08:36.140 "mp_policy": "active_passive" 00:08:36.140 } 00:08:36.140 } 00:08:36.140 ] 00:08:36.140 00:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=75219 00:08:36.140 00:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:36.140 00:24:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:36.140 Running I/O for 10 seconds... 00:08:37.516 Latency(us) 00:08:37.516 [2024-12-17T00:24:23.519Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:37.516 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.516 Nvme0n1 : 1.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:08:37.516 [2024-12-17T00:24:23.519Z] =================================================================================================================== 00:08:37.516 [2024-12-17T00:24:23.519Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:08:37.516 00:08:38.084 00:24:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b120b6d7-5dc5-4f19-af5f-7da737333adf 00:08:38.343 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.343 Nvme0n1 : 2.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:08:38.343 [2024-12-17T00:24:24.346Z] =================================================================================================================== 00:08:38.343 [2024-12-17T00:24:24.346Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:08:38.343 00:08:38.343 true 00:08:38.343 00:24:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b120b6d7-5dc5-4f19-af5f-7da737333adf 00:08:38.343 00:24:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:38.911 00:24:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:38.911 00:24:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:38.911 00:24:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 75219 00:08:39.183 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.183 Nvme0n1 : 3.00 6561.67 25.63 0.00 0.00 0.00 0.00 0.00 00:08:39.183 [2024-12-17T00:24:25.186Z] =================================================================================================================== 00:08:39.183 [2024-12-17T00:24:25.186Z] Total : 6561.67 25.63 0.00 0.00 0.00 0.00 0.00 00:08:39.183 00:08:40.132 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.132 Nvme0n1 : 4.00 6540.50 25.55 0.00 0.00 0.00 0.00 0.00 00:08:40.132 [2024-12-17T00:24:26.135Z] 
=================================================================================================================== 00:08:40.132 [2024-12-17T00:24:26.135Z] Total : 6540.50 25.55 0.00 0.00 0.00 0.00 0.00 00:08:40.132 00:08:41.508 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.508 Nvme0n1 : 5.00 6502.40 25.40 0.00 0.00 0.00 0.00 0.00 00:08:41.508 [2024-12-17T00:24:27.511Z] =================================================================================================================== 00:08:41.508 [2024-12-17T00:24:27.511Z] Total : 6502.40 25.40 0.00 0.00 0.00 0.00 0.00 00:08:41.508 00:08:42.443 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.444 Nvme0n1 : 6.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:08:42.444 [2024-12-17T00:24:28.447Z] =================================================================================================================== 00:08:42.444 [2024-12-17T00:24:28.447Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:08:42.444 00:08:43.379 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.379 Nvme0n1 : 7.00 6440.71 25.16 0.00 0.00 0.00 0.00 0.00 00:08:43.379 [2024-12-17T00:24:29.382Z] =================================================================================================================== 00:08:43.379 [2024-12-17T00:24:29.382Z] Total : 6440.71 25.16 0.00 0.00 0.00 0.00 0.00 00:08:43.379 00:08:44.314 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.314 Nvme0n1 : 8.00 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:08:44.314 [2024-12-17T00:24:30.317Z] =================================================================================================================== 00:08:44.314 [2024-12-17T00:24:30.317Z] Total : 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:08:44.314 00:08:45.251 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.251 Nvme0n1 : 9.00 6392.33 24.97 0.00 0.00 0.00 0.00 0.00 00:08:45.251 [2024-12-17T00:24:31.254Z] =================================================================================================================== 00:08:45.251 [2024-12-17T00:24:31.254Z] Total : 6392.33 24.97 0.00 0.00 0.00 0.00 0.00 00:08:45.251 00:08:46.187 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:46.187 Nvme0n1 : 10.00 6375.40 24.90 0.00 0.00 0.00 0.00 0.00 00:08:46.187 [2024-12-17T00:24:32.190Z] =================================================================================================================== 00:08:46.187 [2024-12-17T00:24:32.190Z] Total : 6375.40 24.90 0.00 0.00 0.00 0.00 0.00 00:08:46.187 00:08:46.187 00:08:46.187 Latency(us) 00:08:46.187 [2024-12-17T00:24:32.190Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:46.187 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:46.187 Nvme0n1 : 10.01 6378.79 24.92 0.00 0.00 20062.13 16920.20 44564.48 00:08:46.187 [2024-12-17T00:24:32.190Z] =================================================================================================================== 00:08:46.187 [2024-12-17T00:24:32.190Z] Total : 6378.79 24.92 0.00 0.00 20062.13 16920.20 44564.48 00:08:46.187 { 00:08:46.187 "results": [ 00:08:46.187 { 00:08:46.187 "job": "Nvme0n1", 00:08:46.187 "core_mask": "0x2", 00:08:46.187 "workload": "randwrite", 00:08:46.187 "status": "finished", 00:08:46.187 "queue_depth": 128, 00:08:46.187 "io_size": 4096, 00:08:46.187 "runtime": 
10.014754, 00:08:46.187 "iops": 6378.788735100233, 00:08:46.187 "mibps": 24.917143496485284, 00:08:46.187 "io_failed": 0, 00:08:46.187 "io_timeout": 0, 00:08:46.187 "avg_latency_us": 20062.128749882595, 00:08:46.187 "min_latency_us": 16920.203636363636, 00:08:46.187 "max_latency_us": 44564.48 00:08:46.187 } 00:08:46.187 ], 00:08:46.187 "core_count": 1 00:08:46.187 } 00:08:46.187 00:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 75203 00:08:46.187 00:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 75203 ']' 00:08:46.187 00:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 75203 00:08:46.187 00:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:46.187 00:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:46.187 00:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75203 00:08:46.187 00:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:46.187 00:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:46.187 killing process with pid 75203 00:08:46.187 00:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75203' 00:08:46.187 00:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 75203 00:08:46.187 Received shutdown signal, test time was about 10.000000 seconds 00:08:46.187 00:08:46.187 Latency(us) 00:08:46.187 [2024-12-17T00:24:32.190Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:46.187 [2024-12-17T00:24:32.190Z] =================================================================================================================== 00:08:46.187 [2024-12-17T00:24:32.190Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:46.187 00:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 75203 00:08:46.446 00:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:46.705 00:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:46.964 00:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b120b6d7-5dc5-4f19-af5f-7da737333adf 00:08:46.964 00:24:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:47.223 00:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:47.223 00:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:47.223 00:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:47.481 [2024-12-17 00:24:33.448090] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:47.741 00:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b120b6d7-5dc5-4f19-af5f-7da737333adf 00:08:47.741 00:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:47.741 00:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b120b6d7-5dc5-4f19-af5f-7da737333adf 00:08:47.741 00:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:47.741 00:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:47.741 00:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:47.741 00:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:47.741 00:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:47.741 00:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:47.741 00:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:47.741 00:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:47.741 00:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b120b6d7-5dc5-4f19-af5f-7da737333adf 00:08:48.000 request: 00:08:48.000 { 00:08:48.000 "uuid": "b120b6d7-5dc5-4f19-af5f-7da737333adf", 00:08:48.000 "method": "bdev_lvol_get_lvstores", 00:08:48.000 "req_id": 1 00:08:48.000 } 00:08:48.000 Got JSON-RPC error response 00:08:48.000 response: 00:08:48.000 { 00:08:48.000 "code": -19, 00:08:48.000 "message": "No such device" 00:08:48.000 } 00:08:48.000 00:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:48.000 00:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:48.000 00:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:48.000 00:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:48.000 00:24:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:48.259 aio_bdev 00:08:48.259 00:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
f75e5add-f74e-47a0-ac41-ddc11fb85d9f 00:08:48.259 00:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=f75e5add-f74e-47a0-ac41-ddc11fb85d9f 00:08:48.259 00:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:48.259 00:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:48.259 00:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:48.259 00:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:48.259 00:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:48.518 00:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f75e5add-f74e-47a0-ac41-ddc11fb85d9f -t 2000 00:08:48.777 [ 00:08:48.777 { 00:08:48.777 "name": "f75e5add-f74e-47a0-ac41-ddc11fb85d9f", 00:08:48.777 "aliases": [ 00:08:48.777 "lvs/lvol" 00:08:48.777 ], 00:08:48.777 "product_name": "Logical Volume", 00:08:48.777 "block_size": 4096, 00:08:48.777 "num_blocks": 38912, 00:08:48.777 "uuid": "f75e5add-f74e-47a0-ac41-ddc11fb85d9f", 00:08:48.777 "assigned_rate_limits": { 00:08:48.777 "rw_ios_per_sec": 0, 00:08:48.777 "rw_mbytes_per_sec": 0, 00:08:48.777 "r_mbytes_per_sec": 0, 00:08:48.777 "w_mbytes_per_sec": 0 00:08:48.777 }, 00:08:48.777 "claimed": false, 00:08:48.777 "zoned": false, 00:08:48.777 "supported_io_types": { 00:08:48.777 "read": true, 00:08:48.777 "write": true, 00:08:48.777 "unmap": true, 00:08:48.777 "flush": false, 00:08:48.777 "reset": true, 00:08:48.777 "nvme_admin": false, 00:08:48.777 "nvme_io": false, 00:08:48.777 "nvme_io_md": false, 00:08:48.777 "write_zeroes": true, 00:08:48.777 "zcopy": false, 00:08:48.777 "get_zone_info": false, 00:08:48.777 "zone_management": false, 00:08:48.777 "zone_append": false, 00:08:48.777 "compare": false, 00:08:48.777 "compare_and_write": false, 00:08:48.777 "abort": false, 00:08:48.777 "seek_hole": true, 00:08:48.777 "seek_data": true, 00:08:48.777 "copy": false, 00:08:48.777 "nvme_iov_md": false 00:08:48.777 }, 00:08:48.777 "driver_specific": { 00:08:48.777 "lvol": { 00:08:48.777 "lvol_store_uuid": "b120b6d7-5dc5-4f19-af5f-7da737333adf", 00:08:48.777 "base_bdev": "aio_bdev", 00:08:48.777 "thin_provision": false, 00:08:48.777 "num_allocated_clusters": 38, 00:08:48.777 "snapshot": false, 00:08:48.777 "clone": false, 00:08:48.777 "esnap_clone": false 00:08:48.777 } 00:08:48.777 } 00:08:48.777 } 00:08:48.777 ] 00:08:48.777 00:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:48.777 00:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b120b6d7-5dc5-4f19-af5f-7da737333adf 00:08:48.778 00:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:49.037 00:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:49.037 00:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b120b6d7-5dc5-4f19-af5f-7da737333adf 00:08:49.037 00:24:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:49.296 00:24:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:49.296 00:24:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f75e5add-f74e-47a0-ac41-ddc11fb85d9f 00:08:49.554 00:24:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b120b6d7-5dc5-4f19-af5f-7da737333adf 00:08:49.814 00:24:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:50.072 00:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:50.703 ************************************ 00:08:50.703 END TEST lvs_grow_clean 00:08:50.703 ************************************ 00:08:50.703 00:08:50.703 real 0m18.172s 00:08:50.703 user 0m17.010s 00:08:50.703 sys 0m2.516s 00:08:50.703 00:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:50.703 00:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:50.703 00:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:50.703 00:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:50.703 00:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:50.703 00:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:50.703 ************************************ 00:08:50.703 START TEST lvs_grow_dirty 00:08:50.703 ************************************ 00:08:50.703 00:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:50.704 00:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:50.704 00:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:50.704 00:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:50.704 00:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:50.704 00:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:50.704 00:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:50.704 00:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:50.704 00:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:50.704 00:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:50.962 00:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:50.963 00:24:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:51.223 00:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=5dcb7382-9ad8-4a3b-b089-016209cfc4da 00:08:51.223 00:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5dcb7382-9ad8-4a3b-b089-016209cfc4da 00:08:51.223 00:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:51.482 00:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:51.482 00:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:51.482 00:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5dcb7382-9ad8-4a3b-b089-016209cfc4da lvol 150 00:08:51.741 00:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=b42954cc-c849-4a52-bb9d-6b6efae88c00 00:08:51.741 00:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:51.741 00:24:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:52.309 [2024-12-17 00:24:38.017438] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:52.309 [2024-12-17 00:24:38.017731] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:52.309 true 00:08:52.309 00:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5dcb7382-9ad8-4a3b-b089-016209cfc4da 00:08:52.309 00:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:52.309 00:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:52.309 00:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:52.877 00:24:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b42954cc-c849-4a52-bb9d-6b6efae88c00 00:08:52.877 00:24:38 
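Editor's note: the dirty variant repeats the clean-path setup: the 200 MiB backing file with 4 MiB clusters yields 49 data clusters (one of the 50 clusters goes to lvstore metadata), a 150 MiB lvol is carved out of it, and truncating the file to 400 MiB plus bdev_aio_rescan grows the AIO bdev from 51200 to 102400 blocks while total_data_clusters stays at 49 until the lvstore is explicitly grown. The export of this lvol continues just below; the grow-and-verify step that both variants run mid-workload looks roughly like this, assuming the lvstore UUID from this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    lvs=5dcb7382-9ad8-4a3b-b089-016209cfc4da
    # extend the lvstore onto the enlarged AIO bdev
    "$rpc" bdev_lvol_grow_lvstore -u "$lvs"
    # 400 MiB at 4 MiB per cluster, minus metadata, should now report 99 data clusters
    "$rpc" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'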
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:53.136 [2024-12-17 00:24:39.070121] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:53.136 00:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:53.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:53.396 00:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=75476 00:08:53.396 00:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:53.396 00:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:53.396 00:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 75476 /var/tmp/bdevperf.sock 00:08:53.396 00:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 75476 ']' 00:08:53.396 00:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:53.396 00:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:53.396 00:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:53.396 00:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:53.396 00:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:53.396 [2024-12-17 00:24:39.384050] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
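Editor's note: bdevperf is started here in wait-for-RPC mode (-z) with a 10-second, 4 KiB random-write workload at queue depth 128, and everything after that is driven over its own socket at /var/tmp/bdevperf.sock: the controller attach that follows in the log and the perform_tests call that actually starts I/O. A minimal sketch of that initiator-side sequence, using the same paths, flags and NQN as this run:

    spdk=/home/vagrant/spdk_repo/spdk
    # -z makes bdevperf wait for RPC configuration instead of starting immediately
    "$spdk"/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    # attach an initiator-side NVMe bdev (Nvme0 -> Nvme0n1) to the exported subsystem
    "$spdk"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    # kick off the configured workload
    "$spdk"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests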
00:08:53.396 [2024-12-17 00:24:39.384480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75476 ] 00:08:53.655 [2024-12-17 00:24:39.520742] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.655 [2024-12-17 00:24:39.562723] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.655 [2024-12-17 00:24:39.595896] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:53.655 00:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:53.655 00:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:53.655 00:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:54.224 Nvme0n1 00:08:54.224 00:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:54.482 [ 00:08:54.482 { 00:08:54.482 "name": "Nvme0n1", 00:08:54.482 "aliases": [ 00:08:54.482 "b42954cc-c849-4a52-bb9d-6b6efae88c00" 00:08:54.483 ], 00:08:54.483 "product_name": "NVMe disk", 00:08:54.483 "block_size": 4096, 00:08:54.483 "num_blocks": 38912, 00:08:54.483 "uuid": "b42954cc-c849-4a52-bb9d-6b6efae88c00", 00:08:54.483 "numa_id": -1, 00:08:54.483 "assigned_rate_limits": { 00:08:54.483 "rw_ios_per_sec": 0, 00:08:54.483 "rw_mbytes_per_sec": 0, 00:08:54.483 "r_mbytes_per_sec": 0, 00:08:54.483 "w_mbytes_per_sec": 0 00:08:54.483 }, 00:08:54.483 "claimed": false, 00:08:54.483 "zoned": false, 00:08:54.483 "supported_io_types": { 00:08:54.483 "read": true, 00:08:54.483 "write": true, 00:08:54.483 "unmap": true, 00:08:54.483 "flush": true, 00:08:54.483 "reset": true, 00:08:54.483 "nvme_admin": true, 00:08:54.483 "nvme_io": true, 00:08:54.483 "nvme_io_md": false, 00:08:54.483 "write_zeroes": true, 00:08:54.483 "zcopy": false, 00:08:54.483 "get_zone_info": false, 00:08:54.483 "zone_management": false, 00:08:54.483 "zone_append": false, 00:08:54.483 "compare": true, 00:08:54.483 "compare_and_write": true, 00:08:54.483 "abort": true, 00:08:54.483 "seek_hole": false, 00:08:54.483 "seek_data": false, 00:08:54.483 "copy": true, 00:08:54.483 "nvme_iov_md": false 00:08:54.483 }, 00:08:54.483 "memory_domains": [ 00:08:54.483 { 00:08:54.483 "dma_device_id": "system", 00:08:54.483 "dma_device_type": 1 00:08:54.483 } 00:08:54.483 ], 00:08:54.483 "driver_specific": { 00:08:54.483 "nvme": [ 00:08:54.483 { 00:08:54.483 "trid": { 00:08:54.483 "trtype": "TCP", 00:08:54.483 "adrfam": "IPv4", 00:08:54.483 "traddr": "10.0.0.3", 00:08:54.483 "trsvcid": "4420", 00:08:54.483 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:54.483 }, 00:08:54.483 "ctrlr_data": { 00:08:54.483 "cntlid": 1, 00:08:54.483 "vendor_id": "0x8086", 00:08:54.483 "model_number": "SPDK bdev Controller", 00:08:54.483 "serial_number": "SPDK0", 00:08:54.483 "firmware_revision": "24.09.1", 00:08:54.483 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:54.483 "oacs": { 00:08:54.483 "security": 0, 00:08:54.483 "format": 0, 00:08:54.483 "firmware": 0, 
00:08:54.483 "ns_manage": 0 00:08:54.483 }, 00:08:54.483 "multi_ctrlr": true, 00:08:54.483 "ana_reporting": false 00:08:54.483 }, 00:08:54.483 "vs": { 00:08:54.483 "nvme_version": "1.3" 00:08:54.483 }, 00:08:54.483 "ns_data": { 00:08:54.483 "id": 1, 00:08:54.483 "can_share": true 00:08:54.483 } 00:08:54.483 } 00:08:54.483 ], 00:08:54.483 "mp_policy": "active_passive" 00:08:54.483 } 00:08:54.483 } 00:08:54.483 ] 00:08:54.483 00:24:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=75492 00:08:54.483 00:24:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:54.483 00:24:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:54.483 Running I/O for 10 seconds... 00:08:55.419 Latency(us) 00:08:55.419 [2024-12-17T00:24:41.422Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.419 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.419 Nvme0n1 : 1.00 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:08:55.419 [2024-12-17T00:24:41.422Z] =================================================================================================================== 00:08:55.419 [2024-12-17T00:24:41.422Z] Total : 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:08:55.419 00:08:56.405 00:24:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5dcb7382-9ad8-4a3b-b089-016209cfc4da 00:08:56.405 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.405 Nvme0n1 : 2.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:08:56.405 [2024-12-17T00:24:42.408Z] =================================================================================================================== 00:08:56.405 [2024-12-17T00:24:42.408Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:08:56.405 00:08:56.664 true 00:08:56.664 00:24:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5dcb7382-9ad8-4a3b-b089-016209cfc4da 00:08:56.664 00:24:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:56.923 00:24:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:56.923 00:24:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:56.923 00:24:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 75492 00:08:57.491 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.491 Nvme0n1 : 3.00 6434.67 25.14 0.00 0.00 0.00 0.00 0.00 00:08:57.491 [2024-12-17T00:24:43.494Z] =================================================================================================================== 00:08:57.491 [2024-12-17T00:24:43.494Z] Total : 6434.67 25.14 0.00 0.00 0.00 0.00 0.00 00:08:57.491 00:08:58.428 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.428 Nvme0n1 : 4.00 6306.00 24.63 0.00 0.00 0.00 0.00 0.00 00:08:58.428 [2024-12-17T00:24:44.431Z] 
=================================================================================================================== 00:08:58.428 [2024-12-17T00:24:44.431Z] Total : 6306.00 24.63 0.00 0.00 0.00 0.00 0.00 00:08:58.428 00:08:59.804 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.804 Nvme0n1 : 5.00 6295.20 24.59 0.00 0.00 0.00 0.00 0.00 00:08:59.804 [2024-12-17T00:24:45.807Z] =================================================================================================================== 00:08:59.804 [2024-12-17T00:24:45.807Z] Total : 6295.20 24.59 0.00 0.00 0.00 0.00 0.00 00:08:59.804 00:09:00.741 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.741 Nvme0n1 : 6.00 6262.00 24.46 0.00 0.00 0.00 0.00 0.00 00:09:00.741 [2024-12-17T00:24:46.744Z] =================================================================================================================== 00:09:00.741 [2024-12-17T00:24:46.744Z] Total : 6262.00 24.46 0.00 0.00 0.00 0.00 0.00 00:09:00.741 00:09:01.677 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.677 Nvme0n1 : 7.00 6274.57 24.51 0.00 0.00 0.00 0.00 0.00 00:09:01.677 [2024-12-17T00:24:47.680Z] =================================================================================================================== 00:09:01.677 [2024-12-17T00:24:47.680Z] Total : 6274.57 24.51 0.00 0.00 0.00 0.00 0.00 00:09:01.677 00:09:02.614 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.614 Nvme0n1 : 8.00 6214.25 24.27 0.00 0.00 0.00 0.00 0.00 00:09:02.614 [2024-12-17T00:24:48.617Z] =================================================================================================================== 00:09:02.614 [2024-12-17T00:24:48.617Z] Total : 6214.25 24.27 0.00 0.00 0.00 0.00 0.00 00:09:02.614 00:09:03.557 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.557 Nvme0n1 : 9.00 6215.22 24.28 0.00 0.00 0.00 0.00 0.00 00:09:03.557 [2024-12-17T00:24:49.560Z] =================================================================================================================== 00:09:03.557 [2024-12-17T00:24:49.560Z] Total : 6215.22 24.28 0.00 0.00 0.00 0.00 0.00 00:09:03.557 00:09:04.495 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.495 Nvme0n1 : 10.00 6203.30 24.23 0.00 0.00 0.00 0.00 0.00 00:09:04.495 [2024-12-17T00:24:50.498Z] =================================================================================================================== 00:09:04.495 [2024-12-17T00:24:50.498Z] Total : 6203.30 24.23 0.00 0.00 0.00 0.00 0.00 00:09:04.495 00:09:04.495 00:09:04.495 Latency(us) 00:09:04.495 [2024-12-17T00:24:50.498Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:04.495 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.495 Nvme0n1 : 10.01 6207.68 24.25 0.00 0.00 20614.18 6851.49 112483.61 00:09:04.495 [2024-12-17T00:24:50.498Z] =================================================================================================================== 00:09:04.495 [2024-12-17T00:24:50.498Z] Total : 6207.68 24.25 0.00 0.00 20614.18 6851.49 112483.61 00:09:04.495 { 00:09:04.495 "results": [ 00:09:04.495 { 00:09:04.495 "job": "Nvme0n1", 00:09:04.495 "core_mask": "0x2", 00:09:04.495 "workload": "randwrite", 00:09:04.495 "status": "finished", 00:09:04.495 "queue_depth": 128, 00:09:04.495 "io_size": 4096, 00:09:04.495 "runtime": 
10.013566, 00:09:04.495 "iops": 6207.678663125604, 00:09:04.495 "mibps": 24.24874477783439, 00:09:04.495 "io_failed": 0, 00:09:04.495 "io_timeout": 0, 00:09:04.495 "avg_latency_us": 20614.18149784065, 00:09:04.495 "min_latency_us": 6851.490909090909, 00:09:04.495 "max_latency_us": 112483.60727272727 00:09:04.495 } 00:09:04.495 ], 00:09:04.495 "core_count": 1 00:09:04.495 } 00:09:04.495 00:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 75476 00:09:04.495 00:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 75476 ']' 00:09:04.495 00:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 75476 00:09:04.495 00:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:09:04.495 00:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:04.495 00:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75476 00:09:04.495 00:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:04.495 00:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:04.495 killing process with pid 75476 00:09:04.495 00:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75476' 00:09:04.495 Received shutdown signal, test time was about 10.000000 seconds 00:09:04.495 00:09:04.495 Latency(us) 00:09:04.495 [2024-12-17T00:24:50.498Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:04.495 [2024-12-17T00:24:50.498Z] =================================================================================================================== 00:09:04.495 [2024-12-17T00:24:50.498Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:04.495 00:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 75476 00:09:04.495 00:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 75476 00:09:04.754 00:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:05.013 00:24:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:05.272 00:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5dcb7382-9ad8-4a3b-b089-016209cfc4da 00:09:05.272 00:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:05.530 00:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:05.530 00:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:05.530 00:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 75128 
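Editor's note: this is the step that makes the lvstore "dirty": rather than tearing it down cleanly, the test SIGKILLs the original nvmf_tgt (pid 75128 in this run) while the lvstore is still open, so the blobstore on the AIO file never gets a clean shutdown and the next load has to run recovery (the bs_recover notices further down). A sketch of that teardown, assuming a $nvmfpid variable holding the target's pid:

    # hard-kill the target; nothing flushes or closes the lvstore
    kill -9 "$nvmfpid"
    wait "$nvmfpid" 2>/dev/null || true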
00:09:05.530 00:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 75128 00:09:05.530 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 75128 Killed "${NVMF_APP[@]}" "$@" 00:09:05.530 00:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:05.530 00:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:05.530 00:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:05.530 00:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:05.530 00:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:05.530 00:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:05.530 00:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=75627 00:09:05.530 00:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 75627 00:09:05.530 00:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 75627 ']' 00:09:05.530 00:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.530 00:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:05.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.530 00:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.530 00:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:05.530 00:24:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:05.530 [2024-12-17 00:24:51.485681] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:05.530 [2024-12-17 00:24:51.485784] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:05.788 [2024-12-17 00:24:51.613509] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.788 [2024-12-17 00:24:51.646007] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:05.788 [2024-12-17 00:24:51.646076] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:05.788 [2024-12-17 00:24:51.646102] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:05.788 [2024-12-17 00:24:51.646109] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:05.788 [2024-12-17 00:24:51.646115] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
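Editor's note: with a replacement nvmf_tgt running (pid 75627), re-creating the AIO bdev over the same backing file is what triggers the recovery shown next: the lvol module examines the new bdev, bs_recover replays the dirty blobstore, and the lvol reappears under its old UUID with the post-grow cluster counts intact. A sketch of that re-attach and verification, reusing the UUIDs and counts from this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    "$rpc" bdev_aio_create "$aio" aio_bdev 4096   # same file, 4 KiB block size
    "$rpc" bdev_wait_for_examine                  # let examine/recovery finish
    # the lvol should come back under its original UUID...
    "$rpc" bdev_get_bdevs -b b42954cc-c849-4a52-bb9d-6b6efae88c00 -t 2000
    # ...and the grown lvstore should still show 61 free of 99 total clusters
    "$rpc" bdev_lvol_get_lvstores -u 5dcb7382-9ad8-4a3b-b089-016209cfc4da \
        | jq -r '"\(.[0].free_clusters)/\(.[0].total_data_clusters)"'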
00:09:05.788 [2024-12-17 00:24:51.646139] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.788 [2024-12-17 00:24:51.674632] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:06.723 00:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:06.723 00:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:06.723 00:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:06.723 00:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:06.723 00:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:06.723 00:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:06.723 00:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:06.723 [2024-12-17 00:24:52.692434] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:06.723 [2024-12-17 00:24:52.693245] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:06.723 [2024-12-17 00:24:52.693539] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:06.983 00:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:06.983 00:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b42954cc-c849-4a52-bb9d-6b6efae88c00 00:09:06.983 00:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=b42954cc-c849-4a52-bb9d-6b6efae88c00 00:09:06.983 00:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:06.983 00:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:06.983 00:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:06.983 00:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:06.983 00:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:06.983 00:24:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b42954cc-c849-4a52-bb9d-6b6efae88c00 -t 2000 00:09:07.242 [ 00:09:07.242 { 00:09:07.242 "name": "b42954cc-c849-4a52-bb9d-6b6efae88c00", 00:09:07.242 "aliases": [ 00:09:07.242 "lvs/lvol" 00:09:07.242 ], 00:09:07.242 "product_name": "Logical Volume", 00:09:07.242 "block_size": 4096, 00:09:07.242 "num_blocks": 38912, 00:09:07.242 "uuid": "b42954cc-c849-4a52-bb9d-6b6efae88c00", 00:09:07.242 "assigned_rate_limits": { 00:09:07.242 "rw_ios_per_sec": 0, 00:09:07.242 "rw_mbytes_per_sec": 0, 00:09:07.242 "r_mbytes_per_sec": 0, 00:09:07.242 "w_mbytes_per_sec": 0 00:09:07.242 }, 00:09:07.242 
"claimed": false, 00:09:07.242 "zoned": false, 00:09:07.242 "supported_io_types": { 00:09:07.242 "read": true, 00:09:07.242 "write": true, 00:09:07.242 "unmap": true, 00:09:07.242 "flush": false, 00:09:07.242 "reset": true, 00:09:07.242 "nvme_admin": false, 00:09:07.242 "nvme_io": false, 00:09:07.242 "nvme_io_md": false, 00:09:07.242 "write_zeroes": true, 00:09:07.242 "zcopy": false, 00:09:07.242 "get_zone_info": false, 00:09:07.242 "zone_management": false, 00:09:07.242 "zone_append": false, 00:09:07.242 "compare": false, 00:09:07.242 "compare_and_write": false, 00:09:07.242 "abort": false, 00:09:07.242 "seek_hole": true, 00:09:07.242 "seek_data": true, 00:09:07.242 "copy": false, 00:09:07.242 "nvme_iov_md": false 00:09:07.242 }, 00:09:07.242 "driver_specific": { 00:09:07.242 "lvol": { 00:09:07.242 "lvol_store_uuid": "5dcb7382-9ad8-4a3b-b089-016209cfc4da", 00:09:07.242 "base_bdev": "aio_bdev", 00:09:07.242 "thin_provision": false, 00:09:07.242 "num_allocated_clusters": 38, 00:09:07.242 "snapshot": false, 00:09:07.242 "clone": false, 00:09:07.242 "esnap_clone": false 00:09:07.242 } 00:09:07.242 } 00:09:07.242 } 00:09:07.242 ] 00:09:07.242 00:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:07.242 00:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5dcb7382-9ad8-4a3b-b089-016209cfc4da 00:09:07.242 00:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:07.500 00:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:07.500 00:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5dcb7382-9ad8-4a3b-b089-016209cfc4da 00:09:07.500 00:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:07.759 00:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:07.759 00:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:08.018 [2024-12-17 00:24:53.918736] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:08.018 00:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5dcb7382-9ad8-4a3b-b089-016209cfc4da 00:09:08.018 00:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:08.018 00:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5dcb7382-9ad8-4a3b-b089-016209cfc4da 00:09:08.018 00:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:08.018 00:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:08.018 00:24:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:08.018 00:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:08.018 00:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:08.018 00:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:08.018 00:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:08.018 00:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:08.018 00:24:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5dcb7382-9ad8-4a3b-b089-016209cfc4da 00:09:08.277 request: 00:09:08.277 { 00:09:08.277 "uuid": "5dcb7382-9ad8-4a3b-b089-016209cfc4da", 00:09:08.277 "method": "bdev_lvol_get_lvstores", 00:09:08.277 "req_id": 1 00:09:08.277 } 00:09:08.277 Got JSON-RPC error response 00:09:08.277 response: 00:09:08.277 { 00:09:08.277 "code": -19, 00:09:08.277 "message": "No such device" 00:09:08.277 } 00:09:08.277 00:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:08.277 00:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:08.277 00:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:08.277 00:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:08.277 00:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:08.536 aio_bdev 00:09:08.537 00:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b42954cc-c849-4a52-bb9d-6b6efae88c00 00:09:08.537 00:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=b42954cc-c849-4a52-bb9d-6b6efae88c00 00:09:08.537 00:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:08.537 00:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:08.537 00:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:08.537 00:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:08.537 00:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:08.795 00:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b42954cc-c849-4a52-bb9d-6b6efae88c00 -t 2000 00:09:09.054 [ 00:09:09.054 { 
00:09:09.054 "name": "b42954cc-c849-4a52-bb9d-6b6efae88c00", 00:09:09.054 "aliases": [ 00:09:09.054 "lvs/lvol" 00:09:09.054 ], 00:09:09.054 "product_name": "Logical Volume", 00:09:09.054 "block_size": 4096, 00:09:09.054 "num_blocks": 38912, 00:09:09.054 "uuid": "b42954cc-c849-4a52-bb9d-6b6efae88c00", 00:09:09.054 "assigned_rate_limits": { 00:09:09.054 "rw_ios_per_sec": 0, 00:09:09.054 "rw_mbytes_per_sec": 0, 00:09:09.054 "r_mbytes_per_sec": 0, 00:09:09.054 "w_mbytes_per_sec": 0 00:09:09.054 }, 00:09:09.054 "claimed": false, 00:09:09.054 "zoned": false, 00:09:09.054 "supported_io_types": { 00:09:09.054 "read": true, 00:09:09.054 "write": true, 00:09:09.054 "unmap": true, 00:09:09.054 "flush": false, 00:09:09.054 "reset": true, 00:09:09.054 "nvme_admin": false, 00:09:09.054 "nvme_io": false, 00:09:09.054 "nvme_io_md": false, 00:09:09.054 "write_zeroes": true, 00:09:09.054 "zcopy": false, 00:09:09.054 "get_zone_info": false, 00:09:09.054 "zone_management": false, 00:09:09.054 "zone_append": false, 00:09:09.054 "compare": false, 00:09:09.054 "compare_and_write": false, 00:09:09.054 "abort": false, 00:09:09.054 "seek_hole": true, 00:09:09.054 "seek_data": true, 00:09:09.054 "copy": false, 00:09:09.054 "nvme_iov_md": false 00:09:09.054 }, 00:09:09.054 "driver_specific": { 00:09:09.054 "lvol": { 00:09:09.054 "lvol_store_uuid": "5dcb7382-9ad8-4a3b-b089-016209cfc4da", 00:09:09.054 "base_bdev": "aio_bdev", 00:09:09.054 "thin_provision": false, 00:09:09.054 "num_allocated_clusters": 38, 00:09:09.054 "snapshot": false, 00:09:09.054 "clone": false, 00:09:09.054 "esnap_clone": false 00:09:09.054 } 00:09:09.055 } 00:09:09.055 } 00:09:09.055 ] 00:09:09.055 00:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:09.055 00:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5dcb7382-9ad8-4a3b-b089-016209cfc4da 00:09:09.055 00:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:09.317 00:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:09.317 00:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5dcb7382-9ad8-4a3b-b089-016209cfc4da 00:09:09.317 00:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:09.575 00:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:09.575 00:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b42954cc-c849-4a52-bb9d-6b6efae88c00 00:09:09.833 00:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5dcb7382-9ad8-4a3b-b089-016209cfc4da 00:09:10.092 00:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:10.351 00:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:10.609 00:09:10.609 real 0m20.097s 00:09:10.609 user 0m40.123s 00:09:10.609 sys 0m9.112s 00:09:10.609 00:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:10.609 00:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:10.609 ************************************ 00:09:10.609 END TEST lvs_grow_dirty 00:09:10.609 ************************************ 00:09:10.868 00:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:10.868 00:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:10.868 00:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:10.868 00:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:10.868 00:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:10.868 00:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:10.869 00:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:10.869 00:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:10.869 00:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:10.869 nvmf_trace.0 00:09:10.869 00:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:10.869 00:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:10.869 00:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:10.869 00:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:10.869 00:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:10.869 00:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:10.869 00:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:10.869 00:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:10.869 rmmod nvme_tcp 00:09:10.869 rmmod nvme_fabrics 00:09:11.127 rmmod nvme_keyring 00:09:11.127 00:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:11.127 00:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:11.127 00:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:11.127 00:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 75627 ']' 00:09:11.127 00:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 75627 00:09:11.127 00:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 75627 ']' 00:09:11.127 00:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 75627 00:09:11.127 00:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:11.127 00:24:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:11.127 00:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75627 00:09:11.127 00:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:11.127 killing process with pid 75627 00:09:11.127 00:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:11.127 00:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75627' 00:09:11.127 00:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 75627 00:09:11.127 00:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 75627 00:09:11.127 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:11.127 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:11.128 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:11.128 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:11.128 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:11.128 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:09:11.128 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:09:11.128 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:11.128 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:11.128 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:11.128 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:11.128 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:11.387 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:11.387 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:11.387 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:11.387 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:11.387 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:11.387 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:11.387 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:11.387 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:11.387 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:11.387 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:11.387 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:09:11.387 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.387 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.387 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.387 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:09:11.387 00:09:11.387 real 0m40.463s 00:09:11.387 user 1m3.447s 00:09:11.387 sys 0m12.363s 00:09:11.387 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:11.387 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:11.387 ************************************ 00:09:11.387 END TEST nvmf_lvs_grow 00:09:11.387 ************************************ 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:11.647 ************************************ 00:09:11.647 START TEST nvmf_bdev_io_wait 00:09:11.647 ************************************ 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:11.647 * Looking for test storage... 
00:09:11.647 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:11.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.647 --rc genhtml_branch_coverage=1 00:09:11.647 --rc genhtml_function_coverage=1 00:09:11.647 --rc genhtml_legend=1 00:09:11.647 --rc geninfo_all_blocks=1 00:09:11.647 --rc geninfo_unexecuted_blocks=1 00:09:11.647 00:09:11.647 ' 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:11.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.647 --rc genhtml_branch_coverage=1 00:09:11.647 --rc genhtml_function_coverage=1 00:09:11.647 --rc genhtml_legend=1 00:09:11.647 --rc geninfo_all_blocks=1 00:09:11.647 --rc geninfo_unexecuted_blocks=1 00:09:11.647 00:09:11.647 ' 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:11.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.647 --rc genhtml_branch_coverage=1 00:09:11.647 --rc genhtml_function_coverage=1 00:09:11.647 --rc genhtml_legend=1 00:09:11.647 --rc geninfo_all_blocks=1 00:09:11.647 --rc geninfo_unexecuted_blocks=1 00:09:11.647 00:09:11.647 ' 00:09:11.647 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:11.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.647 --rc genhtml_branch_coverage=1 00:09:11.647 --rc genhtml_function_coverage=1 00:09:11.648 --rc genhtml_legend=1 00:09:11.648 --rc geninfo_all_blocks=1 00:09:11.648 --rc geninfo_unexecuted_blocks=1 00:09:11.648 00:09:11.648 ' 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:11.648 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:11.648 
00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:11.648 Cannot find device "nvmf_init_br" 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:09:11.648 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:11.907 Cannot find device "nvmf_init_br2" 00:09:11.907 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:09:11.907 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:11.907 Cannot find device "nvmf_tgt_br" 00:09:11.907 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:09:11.907 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:11.907 Cannot find device "nvmf_tgt_br2" 00:09:11.907 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:09:11.907 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:11.907 Cannot find device "nvmf_init_br" 00:09:11.907 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:09:11.907 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:11.907 Cannot find device "nvmf_init_br2" 00:09:11.907 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:09:11.907 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:11.907 Cannot find device "nvmf_tgt_br" 00:09:11.907 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:09:11.907 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:11.907 Cannot find device "nvmf_tgt_br2" 00:09:11.907 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:09:11.907 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:11.907 Cannot find device "nvmf_br" 00:09:11.907 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:09:11.907 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:11.907 Cannot find device "nvmf_init_if" 00:09:11.907 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:09:11.907 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:11.907 Cannot find device "nvmf_init_if2" 00:09:11.907 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:09:11.907 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:11.907 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:11.907 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:09:11.907 
00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:11.907 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:11.908 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:09:11.908 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:11.908 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:11.908 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:11.908 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:11.908 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:11.908 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:11.908 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:11.908 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:11.908 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:11.908 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:11.908 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:11.908 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:11.908 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:11.908 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:11.908 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:11.908 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:11.908 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:11.908 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:12.167 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:12.167 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:12.167 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:12.167 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:12.167 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:12.167 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:12.167 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:12.167 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:12.167 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:12.167 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:12.167 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:12.167 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:12.167 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:12.167 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:12.167 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:12.167 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:12.167 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:09:12.167 00:09:12.167 --- 10.0.0.3 ping statistics --- 00:09:12.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.167 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:09:12.167 00:24:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:12.167 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:12.167 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:09:12.167 00:09:12.167 --- 10.0.0.4 ping statistics --- 00:09:12.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.167 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:09:12.167 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:12.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:12.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:09:12.167 00:09:12.167 --- 10.0.0.1 ping statistics --- 00:09:12.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.167 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:09:12.167 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:12.167 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:12.167 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:09:12.167 00:09:12.167 --- 10.0.0.2 ping statistics --- 00:09:12.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.167 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:09:12.167 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:12.167 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # return 0 00:09:12.167 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:12.167 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:12.167 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:12.167 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:12.167 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:12.167 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:12.167 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:12.167 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:12.167 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:12.167 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:12.167 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.167 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=75991 00:09:12.167 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:12.167 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 75991 00:09:12.167 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 75991 ']' 00:09:12.167 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.167 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:12.167 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.167 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:12.167 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.167 [2024-12-17 00:24:58.100958] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:12.167 [2024-12-17 00:24:58.101078] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.427 [2024-12-17 00:24:58.240685] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:12.427 [2024-12-17 00:24:58.282413] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:12.427 [2024-12-17 00:24:58.282479] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:12.427 [2024-12-17 00:24:58.282493] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:12.427 [2024-12-17 00:24:58.282505] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:12.427 [2024-12-17 00:24:58.282514] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:12.427 [2024-12-17 00:24:58.282690] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.427 [2024-12-17 00:24:58.283493] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.427 [2024-12-17 00:24:58.283402] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:12.427 [2024-12-17 00:24:58.283483] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:12.427 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:12.427 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:09:12.427 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:12.427 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:12.427 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.427 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:12.427 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:12.427 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.427 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.427 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.427 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:12.427 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.427 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.687 [2024-12-17 00:24:58.448860] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.687 [2024-12-17 00:24:58.459920] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.687 Malloc0 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.687 [2024-12-17 00:24:58.510260] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=76024 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=76026 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:12.687 00:24:58 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:12.687 { 00:09:12.687 "params": { 00:09:12.687 "name": "Nvme$subsystem", 00:09:12.687 "trtype": "$TEST_TRANSPORT", 00:09:12.687 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:12.687 "adrfam": "ipv4", 00:09:12.687 "trsvcid": "$NVMF_PORT", 00:09:12.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:12.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:12.687 "hdgst": ${hdgst:-false}, 00:09:12.687 "ddgst": ${ddgst:-false} 00:09:12.687 }, 00:09:12.687 "method": "bdev_nvme_attach_controller" 00:09:12.687 } 00:09:12.687 EOF 00:09:12.687 )") 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=76028 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:12.687 { 00:09:12.687 "params": { 00:09:12.687 "name": "Nvme$subsystem", 00:09:12.687 "trtype": "$TEST_TRANSPORT", 00:09:12.687 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:12.687 "adrfam": "ipv4", 00:09:12.687 "trsvcid": "$NVMF_PORT", 00:09:12.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:12.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:12.687 "hdgst": ${hdgst:-false}, 00:09:12.687 "ddgst": ${ddgst:-false} 00:09:12.687 }, 00:09:12.687 "method": "bdev_nvme_attach_controller" 00:09:12.687 } 00:09:12.687 EOF 00:09:12.687 )") 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=76031 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 
00:09:12.687 { 00:09:12.687 "params": { 00:09:12.687 "name": "Nvme$subsystem", 00:09:12.687 "trtype": "$TEST_TRANSPORT", 00:09:12.687 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:12.687 "adrfam": "ipv4", 00:09:12.687 "trsvcid": "$NVMF_PORT", 00:09:12.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:12.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:12.687 "hdgst": ${hdgst:-false}, 00:09:12.687 "ddgst": ${ddgst:-false} 00:09:12.687 }, 00:09:12.687 "method": "bdev_nvme_attach_controller" 00:09:12.687 } 00:09:12.687 EOF 00:09:12.687 )") 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:12.687 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:12.688 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:12.688 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:12.688 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:12.688 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:12.688 { 00:09:12.688 "params": { 00:09:12.688 "name": "Nvme$subsystem", 00:09:12.688 "trtype": "$TEST_TRANSPORT", 00:09:12.688 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:12.688 "adrfam": "ipv4", 00:09:12.688 "trsvcid": "$NVMF_PORT", 00:09:12.688 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:12.688 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:12.688 "hdgst": ${hdgst:-false}, 00:09:12.688 "ddgst": ${ddgst:-false} 00:09:12.688 }, 00:09:12.688 "method": "bdev_nvme_attach_controller" 00:09:12.688 } 00:09:12.688 EOF 00:09:12.688 )") 00:09:12.688 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:12.688 "params": { 00:09:12.688 "name": "Nvme1", 00:09:12.688 "trtype": "tcp", 00:09:12.688 "traddr": "10.0.0.3", 00:09:12.688 "adrfam": "ipv4", 00:09:12.688 "trsvcid": "4420", 00:09:12.688 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:12.688 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:12.688 "hdgst": false, 00:09:12.688 "ddgst": false 00:09:12.688 }, 00:09:12.688 "method": "bdev_nvme_attach_controller" 00:09:12.688 }' 00:09:12.688 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
00:09:12.688 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:12.688 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:12.688 "params": { 00:09:12.688 "name": "Nvme1", 00:09:12.688 "trtype": "tcp", 00:09:12.688 "traddr": "10.0.0.3", 00:09:12.688 "adrfam": "ipv4", 00:09:12.688 "trsvcid": "4420", 00:09:12.688 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:12.688 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:12.688 "hdgst": false, 00:09:12.688 "ddgst": false 00:09:12.688 }, 00:09:12.688 "method": "bdev_nvme_attach_controller" 00:09:12.688 }' 00:09:12.688 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:12.688 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:12.688 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:12.688 "params": { 00:09:12.688 "name": "Nvme1", 00:09:12.688 "trtype": "tcp", 00:09:12.688 "traddr": "10.0.0.3", 00:09:12.688 "adrfam": "ipv4", 00:09:12.688 "trsvcid": "4420", 00:09:12.688 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:12.688 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:12.688 "hdgst": false, 00:09:12.688 "ddgst": false 00:09:12.688 }, 00:09:12.688 "method": "bdev_nvme_attach_controller" 00:09:12.688 }' 00:09:12.688 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:09:12.688 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:12.688 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:12.688 "params": { 00:09:12.688 "name": "Nvme1", 00:09:12.688 "trtype": "tcp", 00:09:12.688 "traddr": "10.0.0.3", 00:09:12.688 "adrfam": "ipv4", 00:09:12.688 "trsvcid": "4420", 00:09:12.688 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:12.688 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:12.688 "hdgst": false, 00:09:12.688 "ddgst": false 00:09:12.688 }, 00:09:12.688 "method": "bdev_nvme_attach_controller" 00:09:12.688 }' 00:09:12.688 00:24:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 76024 00:09:12.688 [2024-12-17 00:24:58.581644] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:12.688 [2024-12-17 00:24:58.581730] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:12.688 [2024-12-17 00:24:58.587125] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:12.688 [2024-12-17 00:24:58.587201] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:12.688 [2024-12-17 00:24:58.601500] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:12.688 [2024-12-17 00:24:58.601578] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:12.688 [2024-12-17 00:24:58.603656] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:12.688 [2024-12-17 00:24:58.603880] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:12.947 [2024-12-17 00:24:58.762519] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.947 [2024-12-17 00:24:58.789860] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:09:12.947 [2024-12-17 00:24:58.803008] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.947 [2024-12-17 00:24:58.821566] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:12.947 [2024-12-17 00:24:58.829620] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:09:12.947 [2024-12-17 00:24:58.845935] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.947 [2024-12-17 00:24:58.864946] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:12.947 [2024-12-17 00:24:58.873446] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:09:12.947 [2024-12-17 00:24:58.890644] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.947 [2024-12-17 00:24:58.909882] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:12.947 [2024-12-17 00:24:58.917979] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:09:12.947 Running I/O for 1 seconds... 00:09:13.206 [2024-12-17 00:24:58.953432] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:13.206 Running I/O for 1 seconds... 00:09:13.206 Running I/O for 1 seconds... 00:09:13.206 Running I/O for 1 seconds... 
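For readability, the setup that the xtrace above records can be condensed into a few lines. This is a sketch, not the test script itself: rpc_cmd and gen_nvmf_target_json are the SPDK test-harness helpers sourced from autotest_common.sh and nvmf/common.sh, and the /dev/fd/63 seen in the trace is simply the process substitution that feeds each bdevperf instance its generated bdev_nvme_attach_controller config.

#!/usr/bin/env bash
# Condensed sketch of the bdev_io_wait setup traced above (assumes the harness helpers are sourced).

# Target side: back one subsystem with a 64 MiB, 512-byte-block Malloc bdev and listen on TCP.
rpc_cmd bdev_malloc_create 64 512 -b Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# Initiator side: four bdevperf instances, one workload each, reading their bdev config
# (a bdev_nvme_attach_controller entry from gen_nvmf_target_json) via process substitution.
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
"$bdevperf" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
"$bdevperf" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
"$bdevperf" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
"$bdevperf" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"

Each instance runs a single workload (write, read, flush, unmap) for one second against the same Malloc0-backed namespace; the per-workload latency tables follow below.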
00:09:14.142 9475.00 IOPS, 37.01 MiB/s 00:09:14.142 Latency(us) 00:09:14.142 [2024-12-17T00:25:00.145Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.142 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:14.142 Nvme1n1 : 1.01 9527.76 37.22 0.00 0.00 13373.43 6672.76 20614.05 00:09:14.142 [2024-12-17T00:25:00.145Z] =================================================================================================================== 00:09:14.142 [2024-12-17T00:25:00.145Z] Total : 9527.76 37.22 0.00 0.00 13373.43 6672.76 20614.05 00:09:14.142 9004.00 IOPS, 35.17 MiB/s 00:09:14.142 Latency(us) 00:09:14.142 [2024-12-17T00:25:00.145Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.142 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:14.142 Nvme1n1 : 1.01 9071.45 35.44 0.00 0.00 14048.61 7268.54 24665.37 00:09:14.142 [2024-12-17T00:25:00.145Z] =================================================================================================================== 00:09:14.142 [2024-12-17T00:25:00.145Z] Total : 9071.45 35.44 0.00 0.00 14048.61 7268.54 24665.37 00:09:14.142 7234.00 IOPS, 28.26 MiB/s 00:09:14.142 Latency(us) 00:09:14.142 [2024-12-17T00:25:00.145Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.142 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:14.142 Nvme1n1 : 1.01 7292.39 28.49 0.00 0.00 17461.00 8460.10 31933.91 00:09:14.142 [2024-12-17T00:25:00.145Z] =================================================================================================================== 00:09:14.142 [2024-12-17T00:25:00.145Z] Total : 7292.39 28.49 0.00 0.00 17461.00 8460.10 31933.91 00:09:14.142 165928.00 IOPS, 648.16 MiB/s 00:09:14.142 Latency(us) 00:09:14.142 [2024-12-17T00:25:00.145Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.142 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:14.142 Nvme1n1 : 1.00 165584.46 646.81 0.00 0.00 768.98 396.57 2055.45 00:09:14.142 [2024-12-17T00:25:00.145Z] =================================================================================================================== 00:09:14.142 [2024-12-17T00:25:00.145Z] Total : 165584.46 646.81 0.00 0.00 768.98 396.57 2055.45 00:09:14.142 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 76026 00:09:14.401 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 76028 00:09:14.401 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 76031 00:09:14.401 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:14.401 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.401 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:14.401 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.401 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:14.401 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:14.401 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # 
nvmfcleanup 00:09:14.401 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:14.401 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:14.401 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:14.401 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:14.401 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:14.401 rmmod nvme_tcp 00:09:14.401 rmmod nvme_fabrics 00:09:14.401 rmmod nvme_keyring 00:09:14.401 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:14.401 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:14.401 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:14.401 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 75991 ']' 00:09:14.401 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 75991 00:09:14.401 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 75991 ']' 00:09:14.401 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 75991 00:09:14.401 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:14.401 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:14.401 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75991 00:09:14.401 killing process with pid 75991 00:09:14.401 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:14.401 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:14.401 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75991' 00:09:14.401 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 75991 00:09:14.402 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 75991 00:09:14.661 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:14.661 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:14.661 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:14.661 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:14.661 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:09:14.661 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:14.661 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:09:14.661 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:14.661 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:14.661 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:14.661 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:14.661 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:14.661 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:14.661 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:14.661 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:14.661 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:14.661 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:14.661 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:14.661 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:14.661 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:14.661 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:14.661 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:14.920 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:14.920 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.920 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.920 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.920 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:09:14.920 ************************************ 00:09:14.920 END TEST nvmf_bdev_io_wait 00:09:14.920 ************************************ 00:09:14.920 00:09:14.920 real 0m3.289s 00:09:14.920 user 0m13.005s 00:09:14.920 sys 0m2.071s 00:09:14.920 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:14.920 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:14.920 00:25:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:14.920 00:25:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:14.920 00:25:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:14.920 00:25:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:14.920 ************************************ 00:09:14.920 START TEST nvmf_queue_depth 00:09:14.920 ************************************ 00:09:14.920 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:14.920 * Looking for test storage... 
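The nvmftestfini teardown traced just above reduces to stripping the SPDK-tagged firewall rules and unwinding the veth/bridge topology. A sketch using the interface names from the log; whether _remove_spdk_ns is literally an ip netns delete is an assumption, and module unloads are tolerated to fail just as the harness does with set +e.

#!/usr/bin/env bash
# Sketch of the teardown sequence traced above.

# Unload the host-side NVMe/TCP modules pulled in for the test (failure is tolerated).
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Drop only the SPDK-tagged firewall rules by filtering them out of a saved ruleset.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Detach everything from the test bridge, bring the links down, and delete the veth pairs.
for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$br" nomaster
    ip link set "$br" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk   # assumed equivalent of _remove_spdk_ns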
00:09:14.920 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:14.920 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:14.921 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:09:14.921 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:14.921 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:14.921 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:14.921 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:14.921 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:14.921 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:14.921 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:14.921 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:14.921 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:14.921 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:14.921 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:14.921 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:14.921 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:14.921 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:14.921 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:14.921 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:14.921 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:15.180 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:15.180 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:15.180 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:15.180 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:15.180 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:15.180 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:15.180 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:15.180 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:15.180 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:15.180 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:15.180 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:15.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.181 --rc genhtml_branch_coverage=1 00:09:15.181 --rc genhtml_function_coverage=1 00:09:15.181 --rc genhtml_legend=1 00:09:15.181 --rc geninfo_all_blocks=1 00:09:15.181 --rc geninfo_unexecuted_blocks=1 00:09:15.181 00:09:15.181 ' 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:15.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.181 --rc genhtml_branch_coverage=1 00:09:15.181 --rc genhtml_function_coverage=1 00:09:15.181 --rc genhtml_legend=1 00:09:15.181 --rc geninfo_all_blocks=1 00:09:15.181 --rc geninfo_unexecuted_blocks=1 00:09:15.181 00:09:15.181 ' 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:15.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.181 --rc genhtml_branch_coverage=1 00:09:15.181 --rc genhtml_function_coverage=1 00:09:15.181 --rc genhtml_legend=1 00:09:15.181 --rc geninfo_all_blocks=1 00:09:15.181 --rc geninfo_unexecuted_blocks=1 00:09:15.181 00:09:15.181 ' 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:15.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.181 --rc genhtml_branch_coverage=1 00:09:15.181 --rc genhtml_function_coverage=1 00:09:15.181 --rc genhtml_legend=1 00:09:15.181 --rc geninfo_all_blocks=1 00:09:15.181 --rc geninfo_unexecuted_blocks=1 00:09:15.181 00:09:15.181 ' 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:15.181 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:15.181 
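As an aside on the host identity sourced from common.sh above: the host NQN is generated with nvme-cli and its trailing UUID is reused as the host ID. A minimal sketch; the exact parameter expansion is an assumption, not necessarily how nvmf/common.sh derives it.

# e.g. nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858
NVME_HOSTNQN=$(nvme gen-hostnqn)
NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}     # keep only the trailing UUID (assumed derivation)
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
echo "${NVME_HOST[@]}"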
00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:15.181 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:15.182 00:25:00 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:15.182 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:15.182 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:15.182 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:15.182 Cannot find device "nvmf_init_br" 00:09:15.182 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:09:15.182 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:15.182 Cannot find device "nvmf_init_br2" 00:09:15.182 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:09:15.182 00:25:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:15.182 Cannot find device "nvmf_tgt_br" 00:09:15.182 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:09:15.182 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:15.182 Cannot find device "nvmf_tgt_br2" 00:09:15.182 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:09:15.182 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:15.182 Cannot find device "nvmf_init_br" 00:09:15.182 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:09:15.182 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:15.182 Cannot find device "nvmf_init_br2" 00:09:15.182 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:09:15.182 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:15.182 Cannot find device "nvmf_tgt_br" 00:09:15.182 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:09:15.182 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:15.182 Cannot find device "nvmf_tgt_br2" 00:09:15.182 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:09:15.182 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:15.182 Cannot find device "nvmf_br" 00:09:15.182 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:09:15.182 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:15.182 Cannot find device "nvmf_init_if" 00:09:15.182 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:09:15.182 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:15.182 Cannot find device "nvmf_init_if2" 00:09:15.182 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:09:15.182 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:15.182 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:15.182 00:25:01 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:09:15.182 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:15.182 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:15.182 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:09:15.182 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:15.182 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:15.182 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:15.182 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:15.182 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:15.182 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:15.182 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:15.182 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:15.182 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:15.182 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:15.182 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:15.182 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:15.463 
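The nvmf_veth_init sequence above builds a small virtual topology: two initiator veths stay in the default namespace (10.0.0.1 and 10.0.0.2), two target veths move into nvmf_tgt_ns_spdk (10.0.0.3 and 10.0.0.4), and the peer ends are all enslaved to one bridge. A condensed sketch of those same commands:

#!/usr/bin/env bash
# Sketch of the veth/bridge topology nvmf_veth_init sets up above.
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Move the target-side ends into the namespace and address both sides.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up and enslave the bridge-side peers to a single bridge.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip link add nvmf_br type bridge && ip link set nvmf_br up
for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$br" master nvmf_br
done

The trace that follows enslaves the remaining peers, inserts the SPDK_NVMF-tagged iptables ACCEPT rules for port 4420, and pings each of the four addresses to confirm the paths are reachable.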
00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:15.463 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:15.463 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:09:15.463 00:09:15.463 --- 10.0.0.3 ping statistics --- 00:09:15.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.463 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:15.463 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:15.463 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:09:15.463 00:09:15.463 --- 10.0.0.4 ping statistics --- 00:09:15.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.463 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:15.463 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:15.463 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:09:15.463 00:09:15.463 --- 10.0.0.1 ping statistics --- 00:09:15.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.463 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:15.463 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:15.463 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:09:15.463 00:09:15.463 --- 10.0.0.2 ping statistics --- 00:09:15.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.463 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@457 -- # return 0 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=76279 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 76279 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 76279 ']' 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:15.463 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.463 [2024-12-17 00:25:01.408003] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:15.463 [2024-12-17 00:25:01.408374] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.725 [2024-12-17 00:25:01.552896] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.725 [2024-12-17 00:25:01.593341] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:15.725 [2024-12-17 00:25:01.593615] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:15.725 [2024-12-17 00:25:01.593785] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:15.725 [2024-12-17 00:25:01.593852] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:15.725 [2024-12-17 00:25:01.593960] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:15.725 [2024-12-17 00:25:01.594037] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.725 [2024-12-17 00:25:01.626060] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:15.725 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:15.725 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:15.725 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:15.725 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:15.725 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.984 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:15.984 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:15.984 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.984 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.984 [2024-12-17 00:25:01.759601] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:15.984 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.984 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:15.984 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.984 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.984 Malloc0 00:09:15.984 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.984 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:15.984 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.984 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:09:15.984 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.984 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:15.984 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.984 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.984 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.984 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:15.984 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.984 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.984 [2024-12-17 00:25:01.812835] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:15.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:15.984 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.984 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=76309 00:09:15.984 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:15.984 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:15.984 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 76309 /var/tmp/bdevperf.sock 00:09:15.984 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 76309 ']' 00:09:15.984 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:15.984 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:15.984 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:15.984 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:15.984 00:25:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:15.984 [2024-12-17 00:25:01.872791] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:15.984 [2024-12-17 00:25:01.872888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76309 ] 00:09:16.243 [2024-12-17 00:25:02.010703] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.243 [2024-12-17 00:25:02.061256] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.243 [2024-12-17 00:25:02.101813] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:16.243 00:25:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:16.243 00:25:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:16.243 00:25:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:16.243 00:25:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.243 00:25:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:16.502 NVMe0n1 00:09:16.502 00:25:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.502 00:25:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:16.502 Running I/O for 10 seconds... 00:09:18.816 7175.00 IOPS, 28.03 MiB/s [2024-12-17T00:25:05.387Z] 7682.00 IOPS, 30.01 MiB/s [2024-12-17T00:25:06.774Z] 7893.67 IOPS, 30.83 MiB/s [2024-12-17T00:25:07.709Z] 8202.75 IOPS, 32.04 MiB/s [2024-12-17T00:25:08.645Z] 8378.20 IOPS, 32.73 MiB/s [2024-12-17T00:25:09.581Z] 8393.00 IOPS, 32.79 MiB/s [2024-12-17T00:25:10.517Z] 8459.29 IOPS, 33.04 MiB/s [2024-12-17T00:25:11.454Z] 8496.12 IOPS, 33.19 MiB/s [2024-12-17T00:25:12.390Z] 8539.78 IOPS, 33.36 MiB/s [2024-12-17T00:25:12.649Z] 8581.00 IOPS, 33.52 MiB/s 00:09:26.646 Latency(us) 00:09:26.646 [2024-12-17T00:25:12.649Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:26.646 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:26.646 Verification LBA range: start 0x0 length 0x4000 00:09:26.646 NVMe0n1 : 10.10 8596.18 33.58 0.00 0.00 118501.77 23950.43 89605.59 00:09:26.646 [2024-12-17T00:25:12.649Z] =================================================================================================================== 00:09:26.646 [2024-12-17T00:25:12.649Z] Total : 8596.18 33.58 0.00 0.00 118501.77 23950.43 89605.59 00:09:26.646 { 00:09:26.646 "results": [ 00:09:26.646 { 00:09:26.646 "job": "NVMe0n1", 00:09:26.646 "core_mask": "0x1", 00:09:26.646 "workload": "verify", 00:09:26.646 "status": "finished", 00:09:26.646 "verify_range": { 00:09:26.646 "start": 0, 00:09:26.646 "length": 16384 00:09:26.646 }, 00:09:26.646 "queue_depth": 1024, 00:09:26.646 "io_size": 4096, 00:09:26.646 "runtime": 10.101458, 00:09:26.646 "iops": 8596.184827972358, 00:09:26.646 "mibps": 33.578846984267024, 00:09:26.646 "io_failed": 0, 00:09:26.646 "io_timeout": 0, 00:09:26.646 "avg_latency_us": 118501.77414443859, 00:09:26.646 "min_latency_us": 23950.429090909092, 00:09:26.646 "max_latency_us": 89605.58545454545 
00:09:26.646 } 00:09:26.646 ], 00:09:26.646 "core_count": 1 00:09:26.646 } 00:09:26.646 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 76309 00:09:26.646 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 76309 ']' 00:09:26.646 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 76309 00:09:26.646 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:26.646 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:26.646 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76309 00:09:26.646 killing process with pid 76309 00:09:26.646 Received shutdown signal, test time was about 10.000000 seconds 00:09:26.646 00:09:26.646 Latency(us) 00:09:26.646 [2024-12-17T00:25:12.649Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:26.646 [2024-12-17T00:25:12.650Z] =================================================================================================================== 00:09:26.647 [2024-12-17T00:25:12.650Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:26.647 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:26.647 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:26.647 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76309' 00:09:26.647 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 76309 00:09:26.647 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 76309 00:09:26.917 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:26.917 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:26.917 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:26.917 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:26.917 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:26.917 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:26.917 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:26.917 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:26.917 rmmod nvme_tcp 00:09:26.917 rmmod nvme_fabrics 00:09:26.917 rmmod nvme_keyring 00:09:26.917 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:26.917 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:26.917 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:26.917 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 76279 ']' 00:09:26.918 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 76279 00:09:26.918 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 76279 ']' 
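The headline numbers from the run are in the JSON block above: roughly 8596 IOPS at queue depth 1024 with an average latency near 118.5 ms and no failed or timed-out I/O. If that block were captured to a file, the interesting fields could be pulled out with jq; a sketch only, where result.json is a hypothetical capture of the output above (jq is not part of the test itself):

    jq '.results[0] | {iops, mibps, avg_latency_us, io_failed}' result.json

Cleanup at the end of queue_depth.sh is explicit (the trap is cleared first): the bdevperf initiator (pid 76309) is killed, nvmftestfini unloads the nvme-tcp, nvme-fabrics and nvme-keyring modules, and the target app (pid 76279) is killed before the SPDK_NVMF iptables rules and the veth/namespace topology are removed in the lines that follow.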
00:09:26.918 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 76279 00:09:26.918 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:26.918 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:26.918 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76279 00:09:26.918 killing process with pid 76279 00:09:26.918 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:26.918 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:26.918 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76279' 00:09:26.918 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 76279 00:09:26.918 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 76279 00:09:27.188 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:27.188 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:27.188 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:27.188 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:27.188 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:09:27.188 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:27.188 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:09:27.188 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:27.188 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:27.188 00:25:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:27.188 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:27.188 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:27.188 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:27.188 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:27.188 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:27.188 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:27.188 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:27.188 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:27.188 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:27.188 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:27.188 00:25:13 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:27.447 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:27.447 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:27.447 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.447 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.447 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.447 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:09:27.447 00:09:27.447 real 0m12.487s 00:09:27.447 user 0m21.386s 00:09:27.447 sys 0m2.165s 00:09:27.447 ************************************ 00:09:27.447 END TEST nvmf_queue_depth 00:09:27.447 ************************************ 00:09:27.447 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:27.447 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:27.447 00:25:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:27.447 00:25:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:27.447 00:25:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:27.447 00:25:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:27.447 ************************************ 00:09:27.447 START TEST nvmf_target_multipath 00:09:27.447 ************************************ 00:09:27.447 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:27.447 * Looking for test storage... 
00:09:27.447 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:27.447 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:27.447 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:09:27.447 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:27.707 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:27.707 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:27.707 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:27.707 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:27.707 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:27.707 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:27.707 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:27.707 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:27.707 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:27.707 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:27.707 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:27.707 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:27.707 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:27.707 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:27.707 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:27.707 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:27.707 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:27.707 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:27.707 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:27.707 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:27.707 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:27.707 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:27.707 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:27.707 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:27.707 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:27.707 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:27.707 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:27.707 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:27.707 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:27.707 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:27.707 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:27.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.707 --rc genhtml_branch_coverage=1 00:09:27.707 --rc genhtml_function_coverage=1 00:09:27.707 --rc genhtml_legend=1 00:09:27.707 --rc geninfo_all_blocks=1 00:09:27.707 --rc geninfo_unexecuted_blocks=1 00:09:27.707 00:09:27.707 ' 00:09:27.707 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:27.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.707 --rc genhtml_branch_coverage=1 00:09:27.707 --rc genhtml_function_coverage=1 00:09:27.707 --rc genhtml_legend=1 00:09:27.707 --rc geninfo_all_blocks=1 00:09:27.707 --rc geninfo_unexecuted_blocks=1 00:09:27.707 00:09:27.707 ' 00:09:27.707 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:27.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.707 --rc genhtml_branch_coverage=1 00:09:27.707 --rc genhtml_function_coverage=1 00:09:27.707 --rc genhtml_legend=1 00:09:27.707 --rc geninfo_all_blocks=1 00:09:27.707 --rc geninfo_unexecuted_blocks=1 00:09:27.707 00:09:27.707 ' 00:09:27.707 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:27.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.707 --rc genhtml_branch_coverage=1 00:09:27.707 --rc genhtml_function_coverage=1 00:09:27.707 --rc genhtml_legend=1 00:09:27.707 --rc geninfo_all_blocks=1 00:09:27.707 --rc geninfo_unexecuted_blocks=1 00:09:27.707 00:09:27.707 ' 00:09:27.707 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.708 
00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:27.708 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:27.708 00:25:13 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:27.708 Cannot find device "nvmf_init_br" 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:27.708 Cannot find device "nvmf_init_br2" 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:27.708 Cannot find device "nvmf_tgt_br" 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:27.708 Cannot find device "nvmf_tgt_br2" 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:27.708 Cannot find device "nvmf_init_br" 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:27.708 Cannot find device "nvmf_init_br2" 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:27.708 Cannot find device "nvmf_tgt_br" 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:27.708 Cannot find device "nvmf_tgt_br2" 00:09:27.708 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:09:27.709 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:27.709 Cannot find device "nvmf_br" 00:09:27.709 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:09:27.709 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:27.709 Cannot find device "nvmf_init_if" 00:09:27.709 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:09:27.709 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:27.709 Cannot find device "nvmf_init_if2" 00:09:27.709 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:09:27.709 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:27.709 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:27.709 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:09:27.709 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:27.709 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:27.709 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:09:27.709 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:27.709 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:27.709 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:27.709 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:27.709 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:27.709 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:27.709 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
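Stripped of the xtrace noise, what nvmf_veth_init builds here is one veth pair per endpoint, with the target-side ends moved into the nvmf_tgt_ns_spdk namespace and all the root-namespace peers joined by a bridge; the bridge enslaving and the SPDK_NVMF iptables ACCEPT rules follow just below. A condensed sketch of the first path only (the second path, nvmf_init_if2/10.0.0.2 towards nvmf_tgt_if2/10.0.0.4, is wired the same way):

    ip netns add nvmf_tgt_ns_spdk
    # initiator-side veth pair stays in the root namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    # target-side pair: the _if end goes into the namespace
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # bridge that joins the root-namespace peer ends together
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br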
00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:27.969 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:27.969 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 00:09:27.969 00:09:27.969 --- 10.0.0.3 ping statistics --- 00:09:27.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.969 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:27.969 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:27.969 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:09:27.969 00:09:27.969 --- 10.0.0.4 ping statistics --- 00:09:27.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.969 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:27.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:27.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:09:27.969 00:09:27.969 --- 10.0.0.1 ping statistics --- 00:09:27.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.969 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:27.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:27.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:09:27.969 00:09:27.969 --- 10.0.0.2 ping statistics --- 00:09:27.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.969 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@457 -- # return 0 00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:27.969 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:27.970 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:27.970 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:27.970 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:27.970 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@505 -- # nvmfpid=76678 00:09:27.970 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:27.970 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@506 -- # waitforlisten 76678 00:09:27.970 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@831 -- # '[' -z 76678 ']' 00:09:27.970 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.970 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:27.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
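With connectivity across the bridge confirmed by the four pings, nvmfappstart amounts to launching nvmf_tgt inside the namespace and waiting for its RPC socket; the harness does this via waitforlisten with up to 100 retries, as the max_retries=100 trace shows. A minimal equivalent under the same paths as this job (the polling loop and its interval are a simplification, not the harness implementation):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the default RPC socket until the target answers
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done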
00:09:27.970 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.970 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:27.970 00:25:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:27.970 [2024-12-17 00:25:13.965319] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:27.970 [2024-12-17 00:25:13.965447] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:28.229 [2024-12-17 00:25:14.104335] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:28.229 [2024-12-17 00:25:14.148890] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:28.229 [2024-12-17 00:25:14.148979] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:28.229 [2024-12-17 00:25:14.149004] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:28.229 [2024-12-17 00:25:14.149014] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:28.229 [2024-12-17 00:25:14.149023] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:28.229 [2024-12-17 00:25:14.149491] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.229 [2024-12-17 00:25:14.149545] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:28.229 [2024-12-17 00:25:14.149763] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:28.229 [2024-12-17 00:25:14.149779] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.229 [2024-12-17 00:25:14.183782] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:28.487 00:25:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:28.487 00:25:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # return 0 00:09:28.487 00:25:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:28.487 00:25:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:28.487 00:25:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:28.487 00:25:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:28.487 00:25:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:28.746 [2024-12-17 00:25:14.587734] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:28.746 00:25:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:29.004 Malloc0 00:09:29.004 00:25:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:29.276 00:25:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:29.541 00:25:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:29.801 [2024-12-17 00:25:15.695126] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:29.801 00:25:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:09:30.060 [2024-12-17 00:25:15.987417] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:09:30.060 00:25:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid=93817295-c2e4-400f-aefe-caa93fc06858 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:30.319 00:25:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid=93817295-c2e4-400f-aefe-caa93fc06858 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:09:30.319 00:25:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:30.319 00:25:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:09:30.319 00:25:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:30.319 00:25:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:30.319 00:25:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:09:32.852 00:25:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:32.852 00:25:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:32.852 00:25:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:32.852 00:25:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:32.852 00:25:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:32.852 00:25:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:09:32.852 00:25:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:32.852 00:25:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:32.852 00:25:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:09:32.852 00:25:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:32.852 00:25:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:32.852 00:25:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:32.852 00:25:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:09:32.852 00:25:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:32.852 00:25:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:32.852 00:25:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:32.852 00:25:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:32.852 00:25:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:32.852 00:25:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:32.852 00:25:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:32.852 00:25:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:32.852 00:25:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:32.852 00:25:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:32.853 00:25:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:32.853 00:25:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:32.853 00:25:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:32.853 00:25:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:32.853 00:25:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:32.853 00:25:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:32.853 00:25:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:32.853 00:25:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:32.853 00:25:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:09:32.853 00:25:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=76760 00:09:32.853 00:25:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:32.853 00:25:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:09:32.853 [global] 00:09:32.853 thread=1 00:09:32.853 invalidate=1 00:09:32.853 rw=randrw 00:09:32.853 time_based=1 00:09:32.853 runtime=6 00:09:32.853 ioengine=libaio 00:09:32.853 direct=1 00:09:32.853 bs=4096 00:09:32.853 iodepth=128 00:09:32.853 norandommap=0 00:09:32.853 numjobs=1 00:09:32.853 00:09:32.853 verify_dump=1 00:09:32.853 verify_backlog=512 00:09:32.853 verify_state_save=0 00:09:32.853 do_verify=1 00:09:32.853 verify=crc32c-intel 00:09:32.853 [job0] 00:09:32.853 filename=/dev/nvme0n1 00:09:32.853 Could not set queue depth (nvme0n1) 00:09:32.853 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:32.853 fio-3.35 00:09:32.853 Starting 1 thread 00:09:33.420 00:25:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:33.988 00:25:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:09:34.246 00:25:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:34.247 00:25:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:34.247 00:25:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:34.247 00:25:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:34.247 00:25:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:34.247 00:25:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:34.247 00:25:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:34.247 00:25:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:34.247 00:25:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:34.247 00:25:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:34.247 00:25:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:34.247 00:25:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:34.247 00:25:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:34.505 00:25:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:09:34.764 00:25:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:34.764 00:25:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:34.764 00:25:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:34.764 00:25:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:34.764 00:25:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:34.764 00:25:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:34.764 00:25:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:34.764 00:25:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:34.764 00:25:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:34.764 00:25:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:34.764 00:25:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:34.764 00:25:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:34.764 00:25:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 76760 00:09:38.954 00:09:38.954 job0: (groupid=0, jobs=1): err= 0: pid=76781: Tue Dec 17 00:25:24 2024 00:09:38.954 read: IOPS=10.7k, BW=41.6MiB/s (43.6MB/s)(250MiB/6002msec) 00:09:38.954 slat (usec): min=5, max=8585, avg=55.78, stdev=221.05 00:09:38.954 clat (usec): min=640, max=16441, avg=8195.69, stdev=1492.14 00:09:38.954 lat (usec): min=652, max=16498, avg=8251.47, stdev=1497.11 00:09:38.954 clat percentiles (usec): 00:09:38.954 | 1.00th=[ 4293], 5.00th=[ 6259], 10.00th=[ 6915], 20.00th=[ 7373], 00:09:38.954 | 30.00th=[ 7635], 40.00th=[ 7832], 50.00th=[ 8029], 60.00th=[ 8291], 00:09:38.954 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9634], 95.00th=[11469], 00:09:38.954 | 99.00th=[13173], 99.50th=[13566], 99.90th=[14615], 99.95th=[14746], 00:09:38.954 | 99.99th=[15533] 00:09:38.954 bw ( KiB/s): min= 5456, max=29416, per=51.99%, avg=22160.73, stdev=6555.36, samples=11 00:09:38.954 iops : min= 1364, max= 7354, avg=5540.18, stdev=1638.84, samples=11 00:09:38.954 write: IOPS=6308, BW=24.6MiB/s (25.8MB/s)(131MiB/5318msec); 0 zone resets 00:09:38.954 slat (usec): min=14, max=2572, avg=62.74, stdev=151.21 00:09:38.954 clat (usec): min=1772, max=15062, avg=7072.56, stdev=1300.67 00:09:38.954 lat (usec): min=1813, max=15098, avg=7135.30, stdev=1304.29 00:09:38.954 clat percentiles (usec): 00:09:38.954 | 1.00th=[ 3228], 5.00th=[ 4178], 10.00th=[ 5669], 20.00th=[ 6521], 00:09:38.954 | 30.00th=[ 6783], 40.00th=[ 6980], 50.00th=[ 7177], 60.00th=[ 7373], 00:09:38.954 | 70.00th=[ 7635], 80.00th=[ 7898], 90.00th=[ 8225], 95.00th=[ 8586], 00:09:38.954 | 99.00th=[10945], 99.50th=[11863], 99.90th=[12911], 99.95th=[13435], 00:09:38.954 | 99.99th=[13960] 00:09:38.954 bw ( KiB/s): min= 5904, max=28592, per=87.95%, avg=22194.18, stdev=6263.62, samples=11 00:09:38.954 iops : min= 1476, max= 7148, avg=5548.55, stdev=1565.91, samples=11 00:09:38.954 lat (usec) : 750=0.01%, 1000=0.01% 00:09:38.954 lat (msec) : 2=0.03%, 4=1.82%, 10=92.10%, 20=6.04% 00:09:38.954 cpu : usr=5.90%, sys=21.40%, ctx=5759, majf=0, minf=90 00:09:38.954 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:38.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:38.954 issued rwts: total=63961,33551,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:38.954 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:38.954 00:09:38.954 Run status group 0 (all jobs): 00:09:38.954 READ: bw=41.6MiB/s (43.6MB/s), 41.6MiB/s-41.6MiB/s (43.6MB/s-43.6MB/s), io=250MiB (262MB), run=6002-6002msec 00:09:38.954 WRITE: bw=24.6MiB/s (25.8MB/s), 24.6MiB/s-24.6MiB/s (25.8MB/s-25.8MB/s), io=131MiB (137MB), run=5318-5318msec 00:09:38.954 00:09:38.954 Disk stats (read/write): 00:09:38.954 nvme0n1: ios=63003/33015, merge=0/0, ticks=494230/218728, in_queue=712958, util=98.55% 00:09:38.954 00:25:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:09:38.954 00:25:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:09:39.214 00:25:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:39.214 00:25:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:39.214 00:25:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:39.214 00:25:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:39.214 00:25:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:39.214 00:25:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:39.214 00:25:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:39.214 00:25:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:39.214 00:25:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:39.214 00:25:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:39.214 00:25:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:39.214 00:25:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:39.214 00:25:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:09:39.214 00:25:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=76867 00:09:39.214 00:25:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:39.214 00:25:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:09:39.481 [global] 00:09:39.481 thread=1 00:09:39.481 invalidate=1 00:09:39.481 rw=randrw 00:09:39.481 time_based=1 00:09:39.481 runtime=6 00:09:39.481 ioengine=libaio 00:09:39.481 direct=1 00:09:39.481 bs=4096 00:09:39.481 iodepth=128 00:09:39.481 norandommap=0 00:09:39.481 numjobs=1 00:09:39.481 00:09:39.481 verify_dump=1 00:09:39.481 verify_backlog=512 00:09:39.481 verify_state_save=0 00:09:39.481 do_verify=1 00:09:39.481 verify=crc32c-intel 00:09:39.481 [job0] 00:09:39.481 filename=/dev/nvme0n1 00:09:39.481 Could not set queue depth (nvme0n1) 00:09:39.481 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:39.481 fio-3.35 00:09:39.481 Starting 1 thread 00:09:40.416 00:25:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:40.675 00:25:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.4 -s 4420 -n non_optimized 00:09:40.934 00:25:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:40.934 00:25:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:40.934 00:25:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:40.934 00:25:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:40.934 00:25:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:40.934 00:25:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:40.934 00:25:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:40.934 00:25:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:40.934 00:25:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:40.934 00:25:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:40.934 00:25:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:40.934 00:25:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:40.934 00:25:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:41.193 00:25:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:09:41.452 00:25:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:41.452 00:25:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:41.452 00:25:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:41.452 00:25:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:41.452 00:25:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:41.452 00:25:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:41.452 00:25:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:41.452 00:25:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:41.452 00:25:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:41.452 00:25:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:41.452 00:25:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:41.452 00:25:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:41.452 00:25:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 76867 00:09:45.642 00:09:45.642 job0: (groupid=0, jobs=1): err= 0: pid=76888: Tue Dec 17 00:25:31 2024 00:09:45.642 read: IOPS=11.4k, BW=44.6MiB/s (46.8MB/s)(268MiB/6002msec) 00:09:45.642 slat (usec): min=2, max=5897, avg=43.82, stdev=198.39 00:09:45.642 clat (usec): min=337, max=14872, avg=7754.42, stdev=1913.53 00:09:45.642 lat (usec): min=358, max=14885, avg=7798.24, stdev=1929.95 00:09:45.642 clat percentiles (usec): 00:09:45.642 | 1.00th=[ 3228], 5.00th=[ 4490], 10.00th=[ 5145], 20.00th=[ 6063], 00:09:45.642 | 30.00th=[ 7111], 40.00th=[ 7701], 50.00th=[ 8094], 60.00th=[ 8291], 00:09:45.642 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[ 9503], 95.00th=[11338], 00:09:45.642 | 99.00th=[13042], 99.50th=[13435], 99.90th=[14091], 99.95th=[14484], 00:09:45.642 | 99.99th=[14877] 00:09:45.642 bw ( KiB/s): min=15536, max=35216, per=52.05%, avg=23764.55, stdev=6114.38, samples=11 00:09:45.642 iops : min= 3884, max= 8804, avg=5941.09, stdev=1528.57, samples=11 00:09:45.642 write: IOPS=6522, BW=25.5MiB/s (26.7MB/s)(138MiB/5414msec); 0 zone resets 00:09:45.642 slat (usec): min=4, max=3140, avg=52.92, stdev=137.80 00:09:45.642 clat (usec): min=1267, max=14782, avg=6523.66, stdev=1848.58 00:09:45.642 lat (usec): min=1287, max=14816, avg=6576.57, stdev=1865.17 00:09:45.642 clat percentiles (usec): 00:09:45.642 | 1.00th=[ 2638], 5.00th=[ 3425], 10.00th=[ 3884], 20.00th=[ 4490], 00:09:45.642 | 30.00th=[ 5211], 40.00th=[ 6521], 50.00th=[ 7177], 60.00th=[ 7504], 00:09:45.642 | 70.00th=[ 7767], 80.00th=[ 8029], 90.00th=[ 8356], 95.00th=[ 8717], 00:09:45.642 | 99.00th=[11076], 99.50th=[11731], 99.90th=[12780], 99.95th=[13304], 00:09:45.642 | 99.99th=[14353] 00:09:45.642 bw ( KiB/s): min=16384, max=34608, per=91.14%, avg=23777.00, stdev=5920.64, samples=11 00:09:45.642 iops : min= 4096, max= 8652, avg=5944.18, stdev=1480.13, samples=11 00:09:45.642 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:09:45.642 lat (msec) : 2=0.16%, 4=5.66%, 10=89.36%, 20=4.80% 00:09:45.642 cpu : usr=5.61%, sys=21.54%, ctx=5769, majf=0, minf=90 00:09:45.642 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:45.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:45.642 issued rwts: total=68509,35312,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.642 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:09:45.642 00:09:45.642 Run status group 0 (all jobs): 00:09:45.642 READ: bw=44.6MiB/s (46.8MB/s), 44.6MiB/s-44.6MiB/s (46.8MB/s-46.8MB/s), io=268MiB (281MB), run=6002-6002msec 00:09:45.642 WRITE: bw=25.5MiB/s (26.7MB/s), 25.5MiB/s-25.5MiB/s (26.7MB/s-26.7MB/s), io=138MiB (145MB), run=5414-5414msec 00:09:45.642 00:09:45.642 Disk stats (read/write): 00:09:45.642 nvme0n1: ios=67593/34800, merge=0/0, ticks=501413/211921, in_queue=713334, util=98.65% 00:09:45.642 00:25:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:45.642 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:45.642 00:25:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:45.642 00:25:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:09:45.642 00:25:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:45.642 00:25:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:45.642 00:25:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:45.642 00:25:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:45.642 00:25:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:09:45.642 00:25:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:46.210 00:25:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:46.210 00:25:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:46.210 00:25:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:46.210 00:25:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:46.210 00:25:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:46.210 00:25:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:46.210 00:25:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:46.210 00:25:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:46.210 00:25:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:46.210 00:25:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:46.210 rmmod nvme_tcp 00:09:46.210 rmmod nvme_fabrics 00:09:46.210 rmmod nvme_keyring 00:09:46.210 00:25:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:46.210 00:25:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:46.210 00:25:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:46.210 00:25:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n 
76678 ']' 00:09:46.210 00:25:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # killprocess 76678 00:09:46.210 00:25:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@950 -- # '[' -z 76678 ']' 00:09:46.210 00:25:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # kill -0 76678 00:09:46.210 00:25:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # uname 00:09:46.210 00:25:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:46.210 00:25:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76678 00:09:46.210 killing process with pid 76678 00:09:46.210 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:46.210 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:46.210 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76678' 00:09:46.210 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@969 -- # kill 76678 00:09:46.210 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@974 -- # wait 76678 00:09:46.210 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:46.210 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:46.210 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:46.210 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:46.210 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:46.210 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:09:46.210 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:09:46.210 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:46.210 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:46.210 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:46.469 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:46.469 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:46.469 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:46.469 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:46.469 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:46.469 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:46.469 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:46.469 00:25:32 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:46.469 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:46.469 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:46.469 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:46.469 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:46.469 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:46.469 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.469 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:46.469 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.469 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:09:46.469 00:09:46.469 real 0m19.139s 00:09:46.469 user 1m11.088s 00:09:46.469 sys 0m9.805s 00:09:46.469 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:46.469 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:46.469 ************************************ 00:09:46.469 END TEST nvmf_target_multipath 00:09:46.469 ************************************ 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:46.730 ************************************ 00:09:46.730 START TEST nvmf_zcopy 00:09:46.730 ************************************ 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:46.730 * Looking for test storage... 
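The nvmf_target_multipath run that finishes above leans on a small check_ana_state helper (multipath.sh@18-25 in the trace): after every nvmf_subsystem_listener_set_ana_state RPC it waits until the kernel's view in /sys/block/<ctrl>/ana_state matches the expected value. A minimal sketch of that helper, reconstructed from the traced variable assignments and [[ ]] tests; the sleep/retry body is an assumption, only the two tests and timeout=20 appear in the trace:

# Sketch of check_ana_state as implied by the xtrace above; not copied from multipath.sh.
check_ana_state() {
    local path=$1 ana_state=$2
    local timeout=20
    local ana_state_f=/sys/block/$path/ana_state
    # Wait for the sysfs file to exist and report the expected ANA state.
    while [[ ! -e $ana_state_f ]] || [[ $(<"$ana_state_f") != "$ana_state" ]]; do
        sleep 1
        (( --timeout == 0 )) && return 1
    done
    return 0
}

# Used in the trace as, for example: check_ana_state nvme0c0n1 non-optimized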
00:09:46.730 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:46.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.730 --rc genhtml_branch_coverage=1 00:09:46.730 --rc genhtml_function_coverage=1 00:09:46.730 --rc genhtml_legend=1 00:09:46.730 --rc geninfo_all_blocks=1 00:09:46.730 --rc geninfo_unexecuted_blocks=1 00:09:46.730 00:09:46.730 ' 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:46.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.730 --rc genhtml_branch_coverage=1 00:09:46.730 --rc genhtml_function_coverage=1 00:09:46.730 --rc genhtml_legend=1 00:09:46.730 --rc geninfo_all_blocks=1 00:09:46.730 --rc geninfo_unexecuted_blocks=1 00:09:46.730 00:09:46.730 ' 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:46.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.730 --rc genhtml_branch_coverage=1 00:09:46.730 --rc genhtml_function_coverage=1 00:09:46.730 --rc genhtml_legend=1 00:09:46.730 --rc geninfo_all_blocks=1 00:09:46.730 --rc geninfo_unexecuted_blocks=1 00:09:46.730 00:09:46.730 ' 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:46.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.730 --rc genhtml_branch_coverage=1 00:09:46.730 --rc genhtml_function_coverage=1 00:09:46.730 --rc genhtml_legend=1 00:09:46.730 --rc geninfo_all_blocks=1 00:09:46.730 --rc geninfo_unexecuted_blocks=1 00:09:46.730 00:09:46.730 ' 00:09:46.730 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
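The lt/cmp_versions trace just above (scripts/common.sh@333-368) is a component-wise version comparison, used here to decide which lcov coverage options get exported. A minimal sketch of that logic, assuming the same splitting on '.', '-' and ':' shown in the IFS assignment; it is reconstructed from the xtrace rather than copied from scripts/common.sh:

# Sketch: compare two dotted version strings component by component.
lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local IFS='.-:' op=$2 v
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then [[ $op == *'>'* ]]; return; fi
        if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then [[ $op == *'<'* ]]; return; fi
    done
    [[ $op == *'='* ]]   # all components equal
}

# As in the trace: lt 1.15 2 succeeds, so lcov 1.15 is treated as older than version 2.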
00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:46.731 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
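The "[: : integer expression expected" message above is test(1) complaining at common.sh line 33, where an empty value is compared numerically (the xtrace shows '[' '' -eq 1 ']'); the run simply continues past it. The failing pattern and a guarded alternative look like this (the variable name below is a placeholder, not the one common.sh uses):

[ '' -eq 1 ]                          # prints "[: : integer expression expected", exit status 2
[ "${SOME_NUMERIC_FLAG:-0}" -eq 1 ]   # defaulting the empty value avoids the message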
00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:46.731 Cannot find device "nvmf_init_br" 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:46.731 00:25:32 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:46.731 Cannot find device "nvmf_init_br2" 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:46.731 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:46.990 Cannot find device "nvmf_tgt_br" 00:09:46.990 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:09:46.990 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:46.990 Cannot find device "nvmf_tgt_br2" 00:09:46.990 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:09:46.990 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:46.990 Cannot find device "nvmf_init_br" 00:09:46.990 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:09:46.990 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:46.990 Cannot find device "nvmf_init_br2" 00:09:46.990 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:09:46.990 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:46.990 Cannot find device "nvmf_tgt_br" 00:09:46.990 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:09:46.990 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:46.990 Cannot find device "nvmf_tgt_br2" 00:09:46.990 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:09:46.990 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:46.990 Cannot find device "nvmf_br" 00:09:46.990 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:09:46.990 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:46.990 Cannot find device "nvmf_init_if" 00:09:46.990 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:09:46.990 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:46.990 Cannot find device "nvmf_init_if2" 00:09:46.991 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:09:46.991 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:46.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:46.991 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:09:46.991 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:46.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:46.991 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:09:46.991 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:46.991 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:46.991 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:09:46.991 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:46.991 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:46.991 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:46.991 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:46.991 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:46.991 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:46.991 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:46.991 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:46.991 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:46.991 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:46.991 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:46.991 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:46.991 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:46.991 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:46.991 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:46.991 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:46.991 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:47.250 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:47.250 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:47.250 00:25:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:47.250 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:47.250 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:47.250 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:47.250 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:47.250 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:47.250 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:47.250 00:25:33 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:47.250 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:47.250 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:47.250 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:47.250 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:47.250 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:09:47.250 00:09:47.250 --- 10.0.0.3 ping statistics --- 00:09:47.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.250 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:09:47.250 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:47.250 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:47.250 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.036 ms 00:09:47.250 00:09:47.250 --- 10.0.0.4 ping statistics --- 00:09:47.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.250 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:09:47.250 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:47.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:47.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:09:47.250 00:09:47.250 --- 10.0.0.1 ping statistics --- 00:09:47.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.250 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:09:47.250 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:47.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:47.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:09:47.250 00:09:47.250 --- 10.0.0.2 ping statistics --- 00:09:47.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.250 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:09:47.250 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:47.250 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@457 -- # return 0 00:09:47.250 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:47.250 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:47.250 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:47.250 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:47.250 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:47.250 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:47.250 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:47.250 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:47.250 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:47.250 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:47.250 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.250 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=77185 00:09:47.250 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:47.250 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 77185 00:09:47.251 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 77185 ']' 00:09:47.251 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.251 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:47.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.251 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.251 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:47.251 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.251 [2024-12-17 00:25:33.151892] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
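The pings above confirm the virtual topology that the preceding nvmf_veth_init trace built: two initiator-side veth interfaces kept in the root namespace (10.0.0.1, 10.0.0.2) and two target-side interfaces moved into nvmf_tgt_ns_spdk (10.0.0.3, 10.0.0.4), all joined by the nvmf_br bridge, with iptables rules accepting TCP port 4420. Condensed into a sketch with the same names and addresses as the trace (error handling and the initial "Cannot find device" cleanup attempts omitted):

# Condensed from the nvmf_veth_init commands traced above (nvmf/common.sh@177-219).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up;  ip link set nvmf_init_if2 up
ip link set nvmf_init_br up;  ip link set nvmf_init_br2 up
ip link set nvmf_tgt_br up;   ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br  master nvmf_br
ip link set nvmf_init_br2 master nvmf_br
ip link set nvmf_tgt_br   master nvmf_br
ip link set nvmf_tgt_br2  master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT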
00:09:47.251 [2024-12-17 00:25:33.151994] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.510 [2024-12-17 00:25:33.284463] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.510 [2024-12-17 00:25:33.317739] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:47.510 [2024-12-17 00:25:33.317806] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:47.510 [2024-12-17 00:25:33.317832] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:47.510 [2024-12-17 00:25:33.317839] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:47.510 [2024-12-17 00:25:33.317846] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:47.510 [2024-12-17 00:25:33.317870] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.510 [2024-12-17 00:25:33.344954] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:47.510 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:47.510 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:09:47.510 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:47.510 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:47.510 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.510 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:47.510 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:47.510 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:47.510 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.510 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.510 [2024-12-17 00:25:33.483900] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:47.510 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.510 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:47.510 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.510 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.510 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.510 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:47.510 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.510 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:09:47.510 [2024-12-17 00:25:33.500082] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:47.510 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.510 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:47.510 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.510 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.769 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.769 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:47.769 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.769 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.769 malloc0 00:09:47.769 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.769 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:47.769 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.769 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.769 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.769 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:47.769 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:47.769 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:09:47.769 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:09:47.769 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:47.769 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:47.769 { 00:09:47.769 "params": { 00:09:47.769 "name": "Nvme$subsystem", 00:09:47.769 "trtype": "$TEST_TRANSPORT", 00:09:47.769 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:47.769 "adrfam": "ipv4", 00:09:47.769 "trsvcid": "$NVMF_PORT", 00:09:47.769 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:47.769 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:47.769 "hdgst": ${hdgst:-false}, 00:09:47.769 "ddgst": ${ddgst:-false} 00:09:47.769 }, 00:09:47.769 "method": "bdev_nvme_attach_controller" 00:09:47.769 } 00:09:47.769 EOF 00:09:47.769 )") 00:09:47.769 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:09:47.769 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 
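At this point zcopy.sh has stood up the whole target over RPC: a TCP transport created with -c 0 --zcopy, subsystem nqn.2016-06.io.spdk:cnode1, a data listener and a discovery listener on 10.0.0.3:4420, and a malloc0 bdev attached as namespace 1. The rpc_cmd calls in the trace go through the test harness wrapper around scripts/rpc.py against the target running in the namespace, so the sequence corresponds roughly to these direct calls (a sketch, not a verbatim replay of the wrapper):

# rpc.py equivalents of the rpc_cmd sequence traced above (zcopy.sh@22-30).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1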
00:09:47.769 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:09:47.769 00:25:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:47.769 "params": { 00:09:47.769 "name": "Nvme1", 00:09:47.769 "trtype": "tcp", 00:09:47.769 "traddr": "10.0.0.3", 00:09:47.769 "adrfam": "ipv4", 00:09:47.769 "trsvcid": "4420", 00:09:47.769 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:47.769 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:47.769 "hdgst": false, 00:09:47.769 "ddgst": false 00:09:47.769 }, 00:09:47.769 "method": "bdev_nvme_attach_controller" 00:09:47.769 }' 00:09:47.769 [2024-12-17 00:25:33.582675] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:47.769 [2024-12-17 00:25:33.582778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77216 ] 00:09:47.769 [2024-12-17 00:25:33.718056] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.769 [2024-12-17 00:25:33.758798] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.028 [2024-12-17 00:25:33.800744] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:48.028 Running I/O for 10 seconds... 00:09:50.341 6264.00 IOPS, 48.94 MiB/s [2024-12-17T00:25:36.912Z] 6383.50 IOPS, 49.87 MiB/s [2024-12-17T00:25:38.315Z] 6376.00 IOPS, 49.81 MiB/s [2024-12-17T00:25:39.252Z] 6337.75 IOPS, 49.51 MiB/s [2024-12-17T00:25:40.188Z] 6310.60 IOPS, 49.30 MiB/s [2024-12-17T00:25:41.124Z] 6339.17 IOPS, 49.52 MiB/s [2024-12-17T00:25:42.060Z] 6367.29 IOPS, 49.74 MiB/s [2024-12-17T00:25:42.996Z] 6390.38 IOPS, 49.92 MiB/s [2024-12-17T00:25:43.932Z] 6375.33 IOPS, 49.81 MiB/s [2024-12-17T00:25:43.932Z] 6383.60 IOPS, 49.87 MiB/s 00:09:57.929 Latency(us) 00:09:57.929 [2024-12-17T00:25:43.932Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:57.929 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:57.929 Verification LBA range: start 0x0 length 0x1000 00:09:57.929 Nvme1n1 : 10.01 6383.30 49.87 0.00 0.00 19987.66 1735.21 33602.09 00:09:57.929 [2024-12-17T00:25:43.932Z] =================================================================================================================== 00:09:57.929 [2024-12-17T00:25:43.932Z] Total : 6383.30 49.87 0.00 0.00 19987.66 1735.21 33602.09 00:09:58.188 00:25:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=77328 00:09:58.188 00:25:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:58.188 00:25:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:58.188 00:25:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:58.188 00:25:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:58.188 00:25:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:09:58.188 00:25:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:09:58.188 00:25:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:58.188 00:25:44 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:58.188 { 00:09:58.188 "params": { 00:09:58.188 "name": "Nvme$subsystem", 00:09:58.188 "trtype": "$TEST_TRANSPORT", 00:09:58.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:58.188 "adrfam": "ipv4", 00:09:58.188 "trsvcid": "$NVMF_PORT", 00:09:58.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:58.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:58.188 "hdgst": ${hdgst:-false}, 00:09:58.188 "ddgst": ${ddgst:-false} 00:09:58.188 }, 00:09:58.188 "method": "bdev_nvme_attach_controller" 00:09:58.188 } 00:09:58.188 EOF 00:09:58.188 )") 00:09:58.188 00:25:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:09:58.188 [2024-12-17 00:25:44.066016] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.188 [2024-12-17 00:25:44.066090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.188 00:25:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:09:58.188 00:25:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:09:58.188 00:25:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:58.188 "params": { 00:09:58.188 "name": "Nvme1", 00:09:58.188 "trtype": "tcp", 00:09:58.188 "traddr": "10.0.0.3", 00:09:58.188 "adrfam": "ipv4", 00:09:58.188 "trsvcid": "4420", 00:09:58.188 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:58.188 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:58.188 "hdgst": false, 00:09:58.188 "ddgst": false 00:09:58.188 }, 00:09:58.188 "method": "bdev_nvme_attach_controller" 00:09:58.188 }' 00:09:58.188 [2024-12-17 00:25:44.077940] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.188 [2024-12-17 00:25:44.077991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.188 [2024-12-17 00:25:44.085927] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.188 [2024-12-17 00:25:44.085975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.188 [2024-12-17 00:25:44.097943] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.188 [2024-12-17 00:25:44.098004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.188 [2024-12-17 00:25:44.109921] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.188 [2024-12-17 00:25:44.109961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.188 [2024-12-17 00:25:44.121941] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.188 [2024-12-17 00:25:44.122013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.188 [2024-12-17 00:25:44.123636] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:58.188 [2024-12-17 00:25:44.123794] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77328 ] 00:09:58.188 [2024-12-17 00:25:44.133916] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.189 [2024-12-17 00:25:44.133963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.189 [2024-12-17 00:25:44.145930] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.189 [2024-12-17 00:25:44.145972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.189 [2024-12-17 00:25:44.157910] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.189 [2024-12-17 00:25:44.157950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.189 [2024-12-17 00:25:44.169978] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.189 [2024-12-17 00:25:44.170022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.189 [2024-12-17 00:25:44.181930] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.189 [2024-12-17 00:25:44.181971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.448 [2024-12-17 00:25:44.193976] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.448 [2024-12-17 00:25:44.194017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.448 [2024-12-17 00:25:44.205940] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.448 [2024-12-17 00:25:44.205978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.448 [2024-12-17 00:25:44.217950] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.448 [2024-12-17 00:25:44.217995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.448 [2024-12-17 00:25:44.229948] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.448 [2024-12-17 00:25:44.229990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.448 [2024-12-17 00:25:44.241950] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.448 [2024-12-17 00:25:44.241990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.448 [2024-12-17 00:25:44.253951] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.448 [2024-12-17 00:25:44.253974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.448 [2024-12-17 00:25:44.265026] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.448 [2024-12-17 00:25:44.265960] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.448 [2024-12-17 00:25:44.265997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.448 [2024-12-17 00:25:44.277983] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.448 [2024-12-17 00:25:44.278029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:09:58.448 [2024-12-17 00:25:44.290012] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.448 [2024-12-17 00:25:44.290061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.448 [2024-12-17 00:25:44.297739] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.448 [2024-12-17 00:25:44.301990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.448 [2024-12-17 00:25:44.302028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.448 [2024-12-17 00:25:44.314038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.448 [2024-12-17 00:25:44.314097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.448 [2024-12-17 00:25:44.326031] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.448 [2024-12-17 00:25:44.326086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.448 [2024-12-17 00:25:44.335381] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:58.448 [2024-12-17 00:25:44.338023] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.448 [2024-12-17 00:25:44.338065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.448 [2024-12-17 00:25:44.350028] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.448 [2024-12-17 00:25:44.350080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.448 [2024-12-17 00:25:44.362031] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.448 [2024-12-17 00:25:44.362087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.448 [2024-12-17 00:25:44.374066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.448 [2024-12-17 00:25:44.374119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.448 [2024-12-17 00:25:44.386063] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.448 [2024-12-17 00:25:44.386109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.448 [2024-12-17 00:25:44.398130] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.448 [2024-12-17 00:25:44.398197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.448 [2024-12-17 00:25:44.410128] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.448 [2024-12-17 00:25:44.410193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.448 [2024-12-17 00:25:44.422137] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.448 [2024-12-17 00:25:44.422188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.448 Running I/O for 5 seconds... 
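The long run of paired subsystem.c / nvmf_rpc.c errors that follows appears to be the test repeatedly re-issuing the namespace add while namespace 1 is already attached to cnode1 (and while the 5-second randrw job is running), so each attempt is rejected. A rough sketch of one such attempt, assuming rpc_cmd wraps scripts/rpc.py with its default socket as elsewhere in this trace, would be:

    # Namespace 1 was already created earlier via:
    #   rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # Re-issuing the same call is expected to fail with
    # "Requested NSID 1 already in use" / "Unable to add namespace".
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1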
00:09:58.448 [2024-12-17 00:25:44.434154] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.448 [2024-12-17 00:25:44.434221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.707 [2024-12-17 00:25:44.453505] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.707 [2024-12-17 00:25:44.453566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.707 [2024-12-17 00:25:44.470920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.707 [2024-12-17 00:25:44.470986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.707 [2024-12-17 00:25:44.486179] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.707 [2024-12-17 00:25:44.486240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.708 [2024-12-17 00:25:44.501372] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.708 [2024-12-17 00:25:44.501436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.708 [2024-12-17 00:25:44.517242] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.708 [2024-12-17 00:25:44.517287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.708 [2024-12-17 00:25:44.534926] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.708 [2024-12-17 00:25:44.534975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.708 [2024-12-17 00:25:44.549964] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.708 [2024-12-17 00:25:44.550009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.708 [2024-12-17 00:25:44.564858] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.708 [2024-12-17 00:25:44.564903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.708 [2024-12-17 00:25:44.580864] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.708 [2024-12-17 00:25:44.580908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.708 [2024-12-17 00:25:44.597514] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.708 [2024-12-17 00:25:44.597558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.708 [2024-12-17 00:25:44.613513] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.708 [2024-12-17 00:25:44.613557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.708 [2024-12-17 00:25:44.630184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.708 [2024-12-17 00:25:44.630229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.708 [2024-12-17 00:25:44.646846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.708 [2024-12-17 00:25:44.646891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.708 [2024-12-17 00:25:44.662674] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.708 
[2024-12-17 00:25:44.662717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.708 [2024-12-17 00:25:44.679750] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.708 [2024-12-17 00:25:44.679795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.708 [2024-12-17 00:25:44.695612] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.708 [2024-12-17 00:25:44.695657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.967 [2024-12-17 00:25:44.712659] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.967 [2024-12-17 00:25:44.712704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.967 [2024-12-17 00:25:44.729519] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.967 [2024-12-17 00:25:44.729564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.967 [2024-12-17 00:25:44.744860] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.967 [2024-12-17 00:25:44.744904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.967 [2024-12-17 00:25:44.760870] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.967 [2024-12-17 00:25:44.760915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.967 [2024-12-17 00:25:44.777252] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.967 [2024-12-17 00:25:44.777296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.967 [2024-12-17 00:25:44.795184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.967 [2024-12-17 00:25:44.795229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.967 [2024-12-17 00:25:44.810943] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.967 [2024-12-17 00:25:44.810988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.967 [2024-12-17 00:25:44.828741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.967 [2024-12-17 00:25:44.828785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.967 [2024-12-17 00:25:44.844404] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.967 [2024-12-17 00:25:44.844435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.967 [2024-12-17 00:25:44.855735] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.967 [2024-12-17 00:25:44.855778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.967 [2024-12-17 00:25:44.872715] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.967 [2024-12-17 00:25:44.872761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.967 [2024-12-17 00:25:44.888649] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.967 [2024-12-17 00:25:44.888694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.967 [2024-12-17 00:25:44.899832] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.967 [2024-12-17 00:25:44.899877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.967 [2024-12-17 00:25:44.915953] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.967 [2024-12-17 00:25:44.915997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.967 [2024-12-17 00:25:44.932716] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.967 [2024-12-17 00:25:44.932762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.967 [2024-12-17 00:25:44.949189] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.967 [2024-12-17 00:25:44.949265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.967 [2024-12-17 00:25:44.966679] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.967 [2024-12-17 00:25:44.966739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.230 [2024-12-17 00:25:44.982112] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.230 [2024-12-17 00:25:44.982156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.230 [2024-12-17 00:25:44.998621] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.230 [2024-12-17 00:25:44.998651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.230 [2024-12-17 00:25:45.014939] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.230 [2024-12-17 00:25:45.014984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.230 [2024-12-17 00:25:45.031395] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.230 [2024-12-17 00:25:45.031437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.230 [2024-12-17 00:25:45.050196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.230 [2024-12-17 00:25:45.050242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.230 [2024-12-17 00:25:45.064642] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.230 [2024-12-17 00:25:45.064674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.230 [2024-12-17 00:25:45.080071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.230 [2024-12-17 00:25:45.080117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.230 [2024-12-17 00:25:45.089077] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.230 [2024-12-17 00:25:45.089122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.230 [2024-12-17 00:25:45.106277] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.230 [2024-12-17 00:25:45.106338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.231 [2024-12-17 00:25:45.124505] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.231 [2024-12-17 00:25:45.124626] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.231 [2024-12-17 00:25:45.138521] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.231 [2024-12-17 00:25:45.138585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.231 [2024-12-17 00:25:45.154463] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.231 [2024-12-17 00:25:45.154530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.231 [2024-12-17 00:25:45.170791] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.231 [2024-12-17 00:25:45.170862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.231 [2024-12-17 00:25:45.189932] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.231 [2024-12-17 00:25:45.190011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.231 [2024-12-17 00:25:45.205541] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.231 [2024-12-17 00:25:45.205602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.231 [2024-12-17 00:25:45.214985] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.231 [2024-12-17 00:25:45.215031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.231 [2024-12-17 00:25:45.230994] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.231 [2024-12-17 00:25:45.231059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.495 [2024-12-17 00:25:45.246742] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.495 [2024-12-17 00:25:45.246799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.495 [2024-12-17 00:25:45.263669] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.495 [2024-12-17 00:25:45.263751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.495 [2024-12-17 00:25:45.279634] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.495 [2024-12-17 00:25:45.279704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.495 [2024-12-17 00:25:45.297428] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.495 [2024-12-17 00:25:45.297490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.495 [2024-12-17 00:25:45.311892] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.495 [2024-12-17 00:25:45.311949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.495 [2024-12-17 00:25:45.327481] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.495 [2024-12-17 00:25:45.327528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.495 [2024-12-17 00:25:45.345593] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.495 [2024-12-17 00:25:45.345655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.495 [2024-12-17 00:25:45.360642] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.495 [2024-12-17 00:25:45.360709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.495 [2024-12-17 00:25:45.369624] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.495 [2024-12-17 00:25:45.369678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.495 [2024-12-17 00:25:45.384358] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.495 [2024-12-17 00:25:45.384411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.495 [2024-12-17 00:25:45.400310] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.495 [2024-12-17 00:25:45.400364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.495 [2024-12-17 00:25:45.417510] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.495 [2024-12-17 00:25:45.417556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.495 [2024-12-17 00:25:45.433809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.495 [2024-12-17 00:25:45.433851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.495 12218.00 IOPS, 95.45 MiB/s [2024-12-17T00:25:45.498Z] [2024-12-17 00:25:45.450570] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.495 [2024-12-17 00:25:45.450613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.495 [2024-12-17 00:25:45.466382] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.495 [2024-12-17 00:25:45.466436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.495 [2024-12-17 00:25:45.483816] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.495 [2024-12-17 00:25:45.483878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.755 [2024-12-17 00:25:45.501887] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.755 [2024-12-17 00:25:45.501942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.755 [2024-12-17 00:25:45.517831] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.755 [2024-12-17 00:25:45.517897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.755 [2024-12-17 00:25:45.536514] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.755 [2024-12-17 00:25:45.536565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.755 [2024-12-17 00:25:45.550597] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.755 [2024-12-17 00:25:45.550644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.755 [2024-12-17 00:25:45.567227] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.755 [2024-12-17 00:25:45.567276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.755 [2024-12-17 00:25:45.583795] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:59.755 [2024-12-17 00:25:45.583840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.755 [2024-12-17 00:25:45.600920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.755 [2024-12-17 00:25:45.600967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.755 [2024-12-17 00:25:45.616688] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.755 [2024-12-17 00:25:45.616734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.755 [2024-12-17 00:25:45.625876] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.755 [2024-12-17 00:25:45.625920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.755 [2024-12-17 00:25:45.641901] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.755 [2024-12-17 00:25:45.641945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.755 [2024-12-17 00:25:45.653789] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.755 [2024-12-17 00:25:45.653817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.755 [2024-12-17 00:25:45.668804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.755 [2024-12-17 00:25:45.668849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.755 [2024-12-17 00:25:45.680118] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.755 [2024-12-17 00:25:45.680162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.755 [2024-12-17 00:25:45.696288] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.755 [2024-12-17 00:25:45.696344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.755 [2024-12-17 00:25:45.711830] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.755 [2024-12-17 00:25:45.711874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.755 [2024-12-17 00:25:45.723905] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.755 [2024-12-17 00:25:45.723950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.755 [2024-12-17 00:25:45.740502] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.755 [2024-12-17 00:25:45.740566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.014 [2024-12-17 00:25:45.756869] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.014 [2024-12-17 00:25:45.756931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.014 [2024-12-17 00:25:45.772898] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.014 [2024-12-17 00:25:45.772978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.014 [2024-12-17 00:25:45.783825] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.014 [2024-12-17 00:25:45.783885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.014 [2024-12-17 00:25:45.800740] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.014 [2024-12-17 00:25:45.800798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.014 [2024-12-17 00:25:45.816086] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.014 [2024-12-17 00:25:45.816148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.014 [2024-12-17 00:25:45.826168] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.014 [2024-12-17 00:25:45.826222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.014 [2024-12-17 00:25:45.841497] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.014 [2024-12-17 00:25:45.841547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.014 [2024-12-17 00:25:45.858609] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.014 [2024-12-17 00:25:45.858677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.014 [2024-12-17 00:25:45.873231] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.014 [2024-12-17 00:25:45.873305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.014 [2024-12-17 00:25:45.889502] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.014 [2024-12-17 00:25:45.889568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.014 [2024-12-17 00:25:45.905350] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.014 [2024-12-17 00:25:45.905435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.014 [2024-12-17 00:25:45.923085] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.014 [2024-12-17 00:25:45.923132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.014 [2024-12-17 00:25:45.938536] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.014 [2024-12-17 00:25:45.938579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.014 [2024-12-17 00:25:45.948561] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.014 [2024-12-17 00:25:45.948622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.015 [2024-12-17 00:25:45.964671] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.015 [2024-12-17 00:25:45.964715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.015 [2024-12-17 00:25:45.981381] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.015 [2024-12-17 00:25:45.981457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.015 [2024-12-17 00:25:45.997874] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.015 [2024-12-17 00:25:45.997918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.015 [2024-12-17 00:25:46.015212] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.015 [2024-12-17 00:25:46.015258] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.274 [2024-12-17 00:25:46.031012] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.274 [2024-12-17 00:25:46.031055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.274 [2024-12-17 00:25:46.049459] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.274 [2024-12-17 00:25:46.049502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.274 [2024-12-17 00:25:46.065668] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.274 [2024-12-17 00:25:46.065713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.274 [2024-12-17 00:25:46.084282] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.274 [2024-12-17 00:25:46.084341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.274 [2024-12-17 00:25:46.098421] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.274 [2024-12-17 00:25:46.098464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.274 [2024-12-17 00:25:46.114350] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.274 [2024-12-17 00:25:46.114393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.274 [2024-12-17 00:25:46.130546] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.274 [2024-12-17 00:25:46.130591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.274 [2024-12-17 00:25:46.147652] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.274 [2024-12-17 00:25:46.147697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.274 [2024-12-17 00:25:46.163543] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.274 [2024-12-17 00:25:46.163587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.274 [2024-12-17 00:25:46.181553] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.274 [2024-12-17 00:25:46.181596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.274 [2024-12-17 00:25:46.197844] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.274 [2024-12-17 00:25:46.197888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.274 [2024-12-17 00:25:46.215198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.274 [2024-12-17 00:25:46.215244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.274 [2024-12-17 00:25:46.232034] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.274 [2024-12-17 00:25:46.232080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.274 [2024-12-17 00:25:46.248739] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.274 [2024-12-17 00:25:46.248784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.274 [2024-12-17 00:25:46.264720] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.274 [2024-12-17 00:25:46.264767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.533 [2024-12-17 00:25:46.283434] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.533 [2024-12-17 00:25:46.283480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.533 [2024-12-17 00:25:46.298180] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.534 [2024-12-17 00:25:46.298245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.534 [2024-12-17 00:25:46.313559] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.534 [2024-12-17 00:25:46.313590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.534 [2024-12-17 00:25:46.322686] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.534 [2024-12-17 00:25:46.322732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.534 [2024-12-17 00:25:46.338686] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.534 [2024-12-17 00:25:46.338745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.534 [2024-12-17 00:25:46.354062] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.534 [2024-12-17 00:25:46.354107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.534 [2024-12-17 00:25:46.371845] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.534 [2024-12-17 00:25:46.371891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.534 [2024-12-17 00:25:46.387829] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.534 [2024-12-17 00:25:46.387874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.534 [2024-12-17 00:25:46.404140] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.534 [2024-12-17 00:25:46.404186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.534 [2024-12-17 00:25:46.422258] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.534 [2024-12-17 00:25:46.422303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.534 12284.00 IOPS, 95.97 MiB/s [2024-12-17T00:25:46.537Z] [2024-12-17 00:25:46.436800] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.534 [2024-12-17 00:25:46.436846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.534 [2024-12-17 00:25:46.451503] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.534 [2024-12-17 00:25:46.451548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.534 [2024-12-17 00:25:46.468001] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.534 [2024-12-17 00:25:46.468034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.534 [2024-12-17 00:25:46.484377] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:00.534 [2024-12-17 00:25:46.484419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.534 [2024-12-17 00:25:46.500659] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.534 [2024-12-17 00:25:46.500713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.534 [2024-12-17 00:25:46.517108] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.534 [2024-12-17 00:25:46.517157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.534 [2024-12-17 00:25:46.534649] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.534 [2024-12-17 00:25:46.534743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.793 [2024-12-17 00:25:46.549394] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.793 [2024-12-17 00:25:46.549472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.793 [2024-12-17 00:25:46.565553] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.793 [2024-12-17 00:25:46.565597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.793 [2024-12-17 00:25:46.581644] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.793 [2024-12-17 00:25:46.581689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.793 [2024-12-17 00:25:46.599168] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.793 [2024-12-17 00:25:46.599215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.793 [2024-12-17 00:25:46.613375] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.793 [2024-12-17 00:25:46.613452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.793 [2024-12-17 00:25:46.629297] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.793 [2024-12-17 00:25:46.629369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.793 [2024-12-17 00:25:46.647611] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.793 [2024-12-17 00:25:46.647658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.793 [2024-12-17 00:25:46.662525] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.793 [2024-12-17 00:25:46.662570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.793 [2024-12-17 00:25:46.672333] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.793 [2024-12-17 00:25:46.672380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.794 [2024-12-17 00:25:46.687264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.794 [2024-12-17 00:25:46.687318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.794 [2024-12-17 00:25:46.703273] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.794 [2024-12-17 00:25:46.703353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.794 [2024-12-17 00:25:46.719402] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.794 [2024-12-17 00:25:46.719461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.794 [2024-12-17 00:25:46.736278] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.794 [2024-12-17 00:25:46.736337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.794 [2024-12-17 00:25:46.754499] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.794 [2024-12-17 00:25:46.754548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.794 [2024-12-17 00:25:46.768927] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.794 [2024-12-17 00:25:46.768982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.794 [2024-12-17 00:25:46.783783] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.794 [2024-12-17 00:25:46.783837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.053 [2024-12-17 00:25:46.800881] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.053 [2024-12-17 00:25:46.800934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.053 [2024-12-17 00:25:46.814260] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.053 [2024-12-17 00:25:46.814310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.053 [2024-12-17 00:25:46.830094] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.053 [2024-12-17 00:25:46.830146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.053 [2024-12-17 00:25:46.846976] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.053 [2024-12-17 00:25:46.847033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.053 [2024-12-17 00:25:46.864318] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.053 [2024-12-17 00:25:46.864398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.053 [2024-12-17 00:25:46.880484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.053 [2024-12-17 00:25:46.880535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.053 [2024-12-17 00:25:46.897923] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.053 [2024-12-17 00:25:46.897976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.053 [2024-12-17 00:25:46.913398] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.053 [2024-12-17 00:25:46.913454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.053 [2024-12-17 00:25:46.924502] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.053 [2024-12-17 00:25:46.924558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.053 [2024-12-17 00:25:46.940466] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.053 [2024-12-17 00:25:46.940523] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.053 [2024-12-17 00:25:46.957345] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.053 [2024-12-17 00:25:46.957420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.053 [2024-12-17 00:25:46.974870] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.053 [2024-12-17 00:25:46.974924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.053 [2024-12-17 00:25:46.989055] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.053 [2024-12-17 00:25:46.989111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.053 [2024-12-17 00:25:47.005306] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.053 [2024-12-17 00:25:47.005388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.053 [2024-12-17 00:25:47.021958] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.053 [2024-12-17 00:25:47.022009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.053 [2024-12-17 00:25:47.038451] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.053 [2024-12-17 00:25:47.038497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.053 [2024-12-17 00:25:47.054342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.053 [2024-12-17 00:25:47.054396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.312 [2024-12-17 00:25:47.070495] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.312 [2024-12-17 00:25:47.070538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.312 [2024-12-17 00:25:47.087982] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.312 [2024-12-17 00:25:47.088027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.312 [2024-12-17 00:25:47.103503] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.313 [2024-12-17 00:25:47.103547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.313 [2024-12-17 00:25:47.115020] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.313 [2024-12-17 00:25:47.115063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.313 [2024-12-17 00:25:47.130689] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.313 [2024-12-17 00:25:47.130734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.313 [2024-12-17 00:25:47.148100] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.313 [2024-12-17 00:25:47.148145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.313 [2024-12-17 00:25:47.163538] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.313 [2024-12-17 00:25:47.163582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.313 [2024-12-17 00:25:47.181975] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.313 [2024-12-17 00:25:47.182018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.313 [2024-12-17 00:25:47.196917] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.313 [2024-12-17 00:25:47.196961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.313 [2024-12-17 00:25:47.207809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.313 [2024-12-17 00:25:47.207870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.313 [2024-12-17 00:25:47.224111] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.313 [2024-12-17 00:25:47.224159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.313 [2024-12-17 00:25:47.239487] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.313 [2024-12-17 00:25:47.239534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.313 [2024-12-17 00:25:47.256431] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.313 [2024-12-17 00:25:47.256461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.313 [2024-12-17 00:25:47.271920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.313 [2024-12-17 00:25:47.271966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.313 [2024-12-17 00:25:47.283842] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.313 [2024-12-17 00:25:47.283886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.313 [2024-12-17 00:25:47.300149] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.313 [2024-12-17 00:25:47.300194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.572 [2024-12-17 00:25:47.316814] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.572 [2024-12-17 00:25:47.316861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.572 [2024-12-17 00:25:47.332267] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.572 [2024-12-17 00:25:47.332303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.572 [2024-12-17 00:25:47.347948] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.572 [2024-12-17 00:25:47.347998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.572 [2024-12-17 00:25:47.365123] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.572 [2024-12-17 00:25:47.365170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.572 [2024-12-17 00:25:47.382941] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.572 [2024-12-17 00:25:47.382986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.572 [2024-12-17 00:25:47.397092] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.572 [2024-12-17 00:25:47.397137] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.572 [2024-12-17 00:25:47.413624] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.572 [2024-12-17 00:25:47.413671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.572 [2024-12-17 00:25:47.429484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.572 [2024-12-17 00:25:47.429528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.572 12274.00 IOPS, 95.89 MiB/s [2024-12-17T00:25:47.575Z] [2024-12-17 00:25:47.447576] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.572 [2024-12-17 00:25:47.447623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.572 [2024-12-17 00:25:47.463150] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.572 [2024-12-17 00:25:47.463195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.572 [2024-12-17 00:25:47.481627] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.572 [2024-12-17 00:25:47.481673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.572 [2024-12-17 00:25:47.496258] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.572 [2024-12-17 00:25:47.496291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.572 [2024-12-17 00:25:47.511096] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.572 [2024-12-17 00:25:47.511142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.572 [2024-12-17 00:25:47.529901] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.572 [2024-12-17 00:25:47.529959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.572 [2024-12-17 00:25:47.545241] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.572 [2024-12-17 00:25:47.545288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.572 [2024-12-17 00:25:47.554570] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.572 [2024-12-17 00:25:47.554603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.572 [2024-12-17 00:25:47.570266] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.572 [2024-12-17 00:25:47.570313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.832 [2024-12-17 00:25:47.586555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.832 [2024-12-17 00:25:47.586598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.832 [2024-12-17 00:25:47.595745] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.832 [2024-12-17 00:25:47.595789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.832 [2024-12-17 00:25:47.610785] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.832 [2024-12-17 00:25:47.610830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.832 [2024-12-17 
00:25:47.626396] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.832 [2024-12-17 00:25:47.626440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.832 [2024-12-17 00:25:47.644143] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.832 [2024-12-17 00:25:47.644212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.832 [2024-12-17 00:25:47.660005] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.832 [2024-12-17 00:25:47.660050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.832 [2024-12-17 00:25:47.669494] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.832 [2024-12-17 00:25:47.669538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.832 [2024-12-17 00:25:47.686082] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.832 [2024-12-17 00:25:47.686115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.832 [2024-12-17 00:25:47.702895] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.832 [2024-12-17 00:25:47.702944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.832 [2024-12-17 00:25:47.712219] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.832 [2024-12-17 00:25:47.712250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.832 [2024-12-17 00:25:47.727258] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.832 [2024-12-17 00:25:47.727304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.832 [2024-12-17 00:25:47.743127] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.832 [2024-12-17 00:25:47.743174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.832 [2024-12-17 00:25:47.760378] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.832 [2024-12-17 00:25:47.760425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.832 [2024-12-17 00:25:47.777090] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.832 [2024-12-17 00:25:47.777136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.832 [2024-12-17 00:25:47.794459] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.832 [2024-12-17 00:25:47.794503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.832 [2024-12-17 00:25:47.809694] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.832 [2024-12-17 00:25:47.809738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.832 [2024-12-17 00:25:47.825034] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.832 [2024-12-17 00:25:47.825079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.093 [2024-12-17 00:25:47.842683] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.093 [2024-12-17 00:25:47.842748] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.093 [2024-12-17 00:25:47.857125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.093 [2024-12-17 00:25:47.857173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.093 [2024-12-17 00:25:47.873822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.093 [2024-12-17 00:25:47.873868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.093 [2024-12-17 00:25:47.890093] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.093 [2024-12-17 00:25:47.890138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.093 [2024-12-17 00:25:47.907564] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.093 [2024-12-17 00:25:47.907609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.093 [2024-12-17 00:25:47.923357] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.093 [2024-12-17 00:25:47.923401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.093 [2024-12-17 00:25:47.941857] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.093 [2024-12-17 00:25:47.941901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.093 [2024-12-17 00:25:47.956070] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.093 [2024-12-17 00:25:47.956118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.093 [2024-12-17 00:25:47.971094] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.093 [2024-12-17 00:25:47.971139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.093 [2024-12-17 00:25:47.987578] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.093 [2024-12-17 00:25:47.987623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.093 [2024-12-17 00:25:48.004513] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.093 [2024-12-17 00:25:48.004571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.093 [2024-12-17 00:25:48.020747] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.093 [2024-12-17 00:25:48.020791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.093 [2024-12-17 00:25:48.038365] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.093 [2024-12-17 00:25:48.038419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.093 [2024-12-17 00:25:48.054049] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.093 [2024-12-17 00:25:48.054094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.093 [2024-12-17 00:25:48.063483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.093 [2024-12-17 00:25:48.063528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.093 [2024-12-17 00:25:48.080077] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.093 [2024-12-17 00:25:48.080122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.353 [2024-12-17 00:25:48.097225] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.353 [2024-12-17 00:25:48.097286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.353 [2024-12-17 00:25:48.113830] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.353 [2024-12-17 00:25:48.113874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.353 [2024-12-17 00:25:48.130794] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.353 [2024-12-17 00:25:48.130838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.353 [2024-12-17 00:25:48.147574] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.353 [2024-12-17 00:25:48.147618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.353 [2024-12-17 00:25:48.163077] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.353 [2024-12-17 00:25:48.163124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.353 [2024-12-17 00:25:48.172692] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.353 [2024-12-17 00:25:48.172737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.353 [2024-12-17 00:25:48.187722] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.353 [2024-12-17 00:25:48.187766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.353 [2024-12-17 00:25:48.203742] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.353 [2024-12-17 00:25:48.203787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.353 [2024-12-17 00:25:48.220304] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.353 [2024-12-17 00:25:48.220362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.353 [2024-12-17 00:25:48.237029] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.353 [2024-12-17 00:25:48.237073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.353 [2024-12-17 00:25:48.254523] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.353 [2024-12-17 00:25:48.254563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.353 [2024-12-17 00:25:48.269453] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.353 [2024-12-17 00:25:48.269488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.353 [2024-12-17 00:25:48.285405] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.353 [2024-12-17 00:25:48.285480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.353 [2024-12-17 00:25:48.305351] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.353 [2024-12-17 00:25:48.305390] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.353 [2024-12-17 00:25:48.320100] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.353 [2024-12-17 00:25:48.320147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.353 [2024-12-17 00:25:48.329674] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.353 [2024-12-17 00:25:48.329735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.353 [2024-12-17 00:25:48.344975] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.353 [2024-12-17 00:25:48.345029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.612 [2024-12-17 00:25:48.360349] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.612 [2024-12-17 00:25:48.360412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.612 [2024-12-17 00:25:48.378579] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.612 [2024-12-17 00:25:48.378624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.612 [2024-12-17 00:25:48.393900] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.612 [2024-12-17 00:25:48.393955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.612 [2024-12-17 00:25:48.409581] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.612 [2024-12-17 00:25:48.409640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.612 [2024-12-17 00:25:48.427285] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.612 [2024-12-17 00:25:48.427371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.612 12245.50 IOPS, 95.67 MiB/s [2024-12-17T00:25:48.615Z] [2024-12-17 00:25:48.441962] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.612 [2024-12-17 00:25:48.442014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.612 [2024-12-17 00:25:48.457133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.613 [2024-12-17 00:25:48.457196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.613 [2024-12-17 00:25:48.473311] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.613 [2024-12-17 00:25:48.473383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.613 [2024-12-17 00:25:48.489009] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.613 [2024-12-17 00:25:48.489065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.613 [2024-12-17 00:25:48.507780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.613 [2024-12-17 00:25:48.507843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.613 [2024-12-17 00:25:48.522114] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.613 [2024-12-17 00:25:48.522166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.613 [2024-12-17 
00:25:48.537979] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.613 [2024-12-17 00:25:48.538025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.613 [2024-12-17 00:25:48.555670] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.613 [2024-12-17 00:25:48.555712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.613 [2024-12-17 00:25:48.570291] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.613 [2024-12-17 00:25:48.570377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.613 [2024-12-17 00:25:48.586172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.613 [2024-12-17 00:25:48.586228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.613 [2024-12-17 00:25:48.603946] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.613 [2024-12-17 00:25:48.604003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.872 [2024-12-17 00:25:48.619735] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.872 [2024-12-17 00:25:48.619790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.872 [2024-12-17 00:25:48.637362] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.872 [2024-12-17 00:25:48.637430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.872 [2024-12-17 00:25:48.652604] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.872 [2024-12-17 00:25:48.652695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.872 [2024-12-17 00:25:48.668426] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.872 [2024-12-17 00:25:48.668487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.872 [2024-12-17 00:25:48.686086] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.872 [2024-12-17 00:25:48.686150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.872 [2024-12-17 00:25:48.701492] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.872 [2024-12-17 00:25:48.701546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.872 [2024-12-17 00:25:48.712758] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.872 [2024-12-17 00:25:48.712809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.872 [2024-12-17 00:25:48.728410] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.872 [2024-12-17 00:25:48.728465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.872 [2024-12-17 00:25:48.746094] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.872 [2024-12-17 00:25:48.746144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.872 [2024-12-17 00:25:48.761532] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.872 [2024-12-17 00:25:48.761590] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.872 [2024-12-17 00:25:48.777890] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.872 [2024-12-17 00:25:48.777949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.872 [2024-12-17 00:25:48.795963] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.873 [2024-12-17 00:25:48.796015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.873 [2024-12-17 00:25:48.811735] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.873 [2024-12-17 00:25:48.811789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.873 [2024-12-17 00:25:48.828875] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.873 [2024-12-17 00:25:48.828933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.873 [2024-12-17 00:25:48.846433] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.873 [2024-12-17 00:25:48.846477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.873 [2024-12-17 00:25:48.861907] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.873 [2024-12-17 00:25:48.861951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.873 [2024-12-17 00:25:48.871493] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.873 [2024-12-17 00:25:48.871537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.132 [2024-12-17 00:25:48.887648] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.132 [2024-12-17 00:25:48.887695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.132 [2024-12-17 00:25:48.904288] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.132 [2024-12-17 00:25:48.904359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.132 [2024-12-17 00:25:48.919388] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.132 [2024-12-17 00:25:48.919431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.132 [2024-12-17 00:25:48.935516] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.132 [2024-12-17 00:25:48.935561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.132 [2024-12-17 00:25:48.952175] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.132 [2024-12-17 00:25:48.952244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.132 [2024-12-17 00:25:48.968289] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.132 [2024-12-17 00:25:48.968346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.132 [2024-12-17 00:25:48.985255] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.132 [2024-12-17 00:25:48.985306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.132 [2024-12-17 00:25:49.002108] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.132 [2024-12-17 00:25:49.002182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.132 [2024-12-17 00:25:49.020793] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.132 [2024-12-17 00:25:49.020875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.132 [2024-12-17 00:25:49.036783] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.132 [2024-12-17 00:25:49.036874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.132 [2024-12-17 00:25:49.052938] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.132 [2024-12-17 00:25:49.053010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.132 [2024-12-17 00:25:49.071376] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.132 [2024-12-17 00:25:49.071443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.132 [2024-12-17 00:25:49.087094] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.132 [2024-12-17 00:25:49.087152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.132 [2024-12-17 00:25:49.105655] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.132 [2024-12-17 00:25:49.105730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.132 [2024-12-17 00:25:49.120011] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.132 [2024-12-17 00:25:49.120069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.392 [2024-12-17 00:25:49.135758] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.392 [2024-12-17 00:25:49.135826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.392 [2024-12-17 00:25:49.153565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.392 [2024-12-17 00:25:49.153628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.392 [2024-12-17 00:25:49.169155] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.392 [2024-12-17 00:25:49.169220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.392 [2024-12-17 00:25:49.185276] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.392 [2024-12-17 00:25:49.185368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.392 [2024-12-17 00:25:49.203586] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.392 [2024-12-17 00:25:49.203658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.392 [2024-12-17 00:25:49.217459] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.392 [2024-12-17 00:25:49.217516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.392 [2024-12-17 00:25:49.233805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.392 [2024-12-17 00:25:49.233869] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.392 [2024-12-17 00:25:49.250360] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.392 [2024-12-17 00:25:49.250403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.392 [2024-12-17 00:25:49.267875] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.392 [2024-12-17 00:25:49.267923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.392 [2024-12-17 00:25:49.283175] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.392 [2024-12-17 00:25:49.283220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.392 [2024-12-17 00:25:49.292712] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.392 [2024-12-17 00:25:49.292741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.392 [2024-12-17 00:25:49.309104] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.392 [2024-12-17 00:25:49.309152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.392 [2024-12-17 00:25:49.324101] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.392 [2024-12-17 00:25:49.324134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.392 [2024-12-17 00:25:49.340343] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.392 [2024-12-17 00:25:49.340375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.392 [2024-12-17 00:25:49.356346] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.392 [2024-12-17 00:25:49.356377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.392 [2024-12-17 00:25:49.374755] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.392 [2024-12-17 00:25:49.374807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.392 [2024-12-17 00:25:49.390099] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.392 [2024-12-17 00:25:49.390149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.652 [2024-12-17 00:25:49.406980] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.652 [2024-12-17 00:25:49.407029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.652 [2024-12-17 00:25:49.423631] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.652 [2024-12-17 00:25:49.423678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.652 12249.00 IOPS, 95.70 MiB/s [2024-12-17T00:25:49.655Z] [2024-12-17 00:25:49.438866] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.652 [2024-12-17 00:25:49.438912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.652 00:10:03.652 Latency(us) 00:10:03.652 [2024-12-17T00:25:49.655Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:03.652 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:03.652 Nvme1n1 
: 5.01 12248.56 95.69 0.00 0.00 10437.39 3574.69 24188.74 00:10:03.652 [2024-12-17T00:25:49.655Z] =================================================================================================================== 00:10:03.652 [2024-12-17T00:25:49.655Z] Total : 12248.56 95.69 0.00 0.00 10437.39 3574.69 24188.74 00:10:03.652 [2024-12-17 00:25:49.448765] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.652 [2024-12-17 00:25:49.448811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.652 [2024-12-17 00:25:49.460791] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.652 [2024-12-17 00:25:49.460833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.652 [2024-12-17 00:25:49.472864] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.652 [2024-12-17 00:25:49.472925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.652 [2024-12-17 00:25:49.484795] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.652 [2024-12-17 00:25:49.484846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.652 [2024-12-17 00:25:49.496799] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.652 [2024-12-17 00:25:49.496850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.652 [2024-12-17 00:25:49.508847] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.652 [2024-12-17 00:25:49.508896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.652 [2024-12-17 00:25:49.520798] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.652 [2024-12-17 00:25:49.520863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.652 [2024-12-17 00:25:49.532787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.652 [2024-12-17 00:25:49.532829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.652 [2024-12-17 00:25:49.544803] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.652 [2024-12-17 00:25:49.544866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.652 [2024-12-17 00:25:49.556804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.652 [2024-12-17 00:25:49.556867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.652 [2024-12-17 00:25:49.568845] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.652 [2024-12-17 00:25:49.568893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.652 [2024-12-17 00:25:49.580857] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.652 [2024-12-17 00:25:49.580911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.652 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (77328) - No such process 00:10:03.652 00:25:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 77328 00:10:03.652 00:25:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:10:03.652 00:25:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.652 00:25:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:03.652 00:25:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.652 00:25:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:03.652 00:25:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.652 00:25:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:03.652 delay0 00:10:03.652 00:25:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.652 00:25:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:03.652 00:25:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.652 00:25:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:03.652 00:25:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.652 00:25:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:10:03.912 [2024-12-17 00:25:49.763948] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:10.480 Initializing NVMe Controllers 00:10:10.480 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:10:10.480 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:10.480 Initialization complete. Launching workers. 
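The RPC sequence above swaps the subsystem's namespace for a deliberately slow delay bdev and then launches the abort example against it. A minimal sketch of the equivalent manual sequence, assuming rpc_cmd resolves to scripts/rpc.py as in the SPDK test harness and that the malloc0 bdev already exists:

    # detach the current namespace and put a delay bdev (backed by malloc0) in its place
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # drive randrw I/O (50% reads, queue depth 64) for 5 s on core 0 and abort it
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'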
00:10:10.480 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 89 00:10:10.480 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 376, failed to submit 33 00:10:10.480 success 266, unsuccessful 110, failed 0 00:10:10.480 00:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:10.480 00:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:10.480 00:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:10.480 00:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:10.480 00:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:10.480 00:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:10.480 00:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:10.480 00:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:10.480 rmmod nvme_tcp 00:10:10.480 rmmod nvme_fabrics 00:10:10.480 rmmod nvme_keyring 00:10:10.480 00:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:10.480 00:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:10.480 00:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:10.480 00:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 77185 ']' 00:10:10.480 00:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 77185 00:10:10.480 00:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 77185 ']' 00:10:10.480 00:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 77185 00:10:10.480 00:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:10:10.480 00:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:10.480 00:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77185 00:10:10.480 killing process with pid 77185 00:10:10.480 00:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:10.480 00:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:10.480 00:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77185' 00:10:10.480 00:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 77185 00:10:10.480 00:25:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 77185 00:10:10.480 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:10.480 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:10.480 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:10.480 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:10.480 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save 00:10:10.480 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:10.480 00:25:56 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore 00:10:10.480 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:10.480 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:10.480 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:10.480 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:10.480 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:10.480 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:10.480 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:10.480 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:10.480 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:10.480 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:10.480 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:10.480 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:10.480 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:10.480 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:10.480 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:10.480 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:10.480 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.480 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:10.480 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.480 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:10:10.480 00:10:10.480 real 0m23.831s 00:10:10.480 user 0m39.028s 00:10:10.480 sys 0m6.637s 00:10:10.480 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:10.480 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:10.480 ************************************ 00:10:10.480 END TEST nvmf_zcopy 00:10:10.480 ************************************ 00:10:10.480 00:25:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:10.480 00:25:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:10.480 00:25:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:10.480 00:25:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:10.480 ************************************ 00:10:10.480 START TEST nvmf_nmic 00:10:10.480 ************************************ 00:10:10.480 00:25:56 
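The teardown just traced and the nmic setup that follows manage firewall state the same way: every rule the harness inserts carries an SPDK_NVMF comment (the ipts wrapper), so cleanup can reload the ruleset with those rules filtered out (the iptr wrapper). Condensed from the commands shown in the trace:

    # setup: tag the rule so it is identifiable later
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    # teardown: restore the ruleset minus anything tagged SPDK_NVMF
    iptables-save | grep -v SPDK_NVMF | iptables-restore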
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:10.480 * Looking for test storage... 00:10:10.480 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:10.480 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:10.480 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:10.480 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:10:10.741 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:10.741 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:10.741 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:10.741 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:10.741 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:10.741 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:10.741 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:10.741 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:10.741 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:10.741 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:10.741 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:10.741 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:10.741 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:10.741 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:10.741 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:10.741 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:10.741 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:10.741 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:10.741 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:10.741 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:10.741 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:10.741 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:10.741 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:10.741 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:10.741 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:10.741 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:10.741 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:10.741 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:10.741 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:10.741 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:10.741 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:10.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.741 --rc genhtml_branch_coverage=1 00:10:10.741 --rc genhtml_function_coverage=1 00:10:10.741 --rc genhtml_legend=1 00:10:10.741 --rc geninfo_all_blocks=1 00:10:10.741 --rc geninfo_unexecuted_blocks=1 00:10:10.741 00:10:10.741 ' 00:10:10.741 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:10.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.741 --rc genhtml_branch_coverage=1 00:10:10.741 --rc genhtml_function_coverage=1 00:10:10.741 --rc genhtml_legend=1 00:10:10.741 --rc geninfo_all_blocks=1 00:10:10.741 --rc geninfo_unexecuted_blocks=1 00:10:10.741 00:10:10.741 ' 00:10:10.741 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:10.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.741 --rc genhtml_branch_coverage=1 00:10:10.741 --rc genhtml_function_coverage=1 00:10:10.741 --rc genhtml_legend=1 00:10:10.741 --rc geninfo_all_blocks=1 00:10:10.741 --rc geninfo_unexecuted_blocks=1 00:10:10.741 00:10:10.741 ' 00:10:10.741 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:10.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.741 --rc genhtml_branch_coverage=1 00:10:10.741 --rc genhtml_function_coverage=1 00:10:10.741 --rc genhtml_legend=1 00:10:10.741 --rc geninfo_all_blocks=1 00:10:10.741 --rc geninfo_unexecuted_blocks=1 00:10:10.741 00:10:10.741 ' 00:10:10.741 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:10.741 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:10.741 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:10.741 00:25:56 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:10.742 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:10.742 00:25:56 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:10.742 Cannot 
find device "nvmf_init_br" 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:10.742 Cannot find device "nvmf_init_br2" 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:10.742 Cannot find device "nvmf_tgt_br" 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:10.742 Cannot find device "nvmf_tgt_br2" 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:10.742 Cannot find device "nvmf_init_br" 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:10.742 Cannot find device "nvmf_init_br2" 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:10.742 Cannot find device "nvmf_tgt_br" 00:10:10.742 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:10:10.743 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:10.743 Cannot find device "nvmf_tgt_br2" 00:10:10.743 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:10:10.743 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:10.743 Cannot find device "nvmf_br" 00:10:10.743 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:10:10.743 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:10.743 Cannot find device "nvmf_init_if" 00:10:10.743 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:10:10.743 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:10.743 Cannot find device "nvmf_init_if2" 00:10:10.743 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:10:10.743 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:10.743 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:10.743 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:10:10.743 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:10.743 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:10.743 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:10:10.743 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:10.743 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:10:11.002 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:11.002 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:11.002 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:11.002 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:11.002 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:11.002 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:11.002 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:11.002 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:11.002 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:11.002 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:11.002 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:11.002 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:11.002 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:11.002 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:11.002 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:11.002 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:11.002 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:11.002 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:11.002 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:11.002 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:11.002 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:11.002 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:11.002 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:11.002 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:11.002 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:11.002 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:11.002 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:11.002 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:11.002 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:11.002 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:11.002 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:11.002 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:11.002 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 00:10:11.002 00:10:11.002 --- 10.0.0.3 ping statistics --- 00:10:11.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.002 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:10:11.002 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:11.003 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:11.003 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:10:11.003 00:10:11.003 --- 10.0.0.4 ping statistics --- 00:10:11.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.003 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:10:11.003 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:11.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:11.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:10:11.003 00:10:11.003 --- 10.0.0.1 ping statistics --- 00:10:11.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.003 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:10:11.003 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:11.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:11.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:10:11.003 00:10:11.003 --- 10.0.0.2 ping statistics --- 00:10:11.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.003 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:10:11.003 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:11.003 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@457 -- # return 0 00:10:11.003 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:11.003 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:11.003 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:11.003 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:11.003 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:11.003 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:11.003 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:11.003 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:11.003 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:11.003 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:11.003 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:11.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.003 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=77718 00:10:11.003 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 77718 00:10:11.003 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:11.003 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 77718 ']' 00:10:11.003 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.003 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:11.003 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.003 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:11.003 00:25:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:11.262 [2024-12-17 00:25:57.029576] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
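(Aside: a condensed, illustrative sketch of the nvmf_veth_init plumbing traced above. It covers only the first initiator/target pair, reuses the interface names and 10.0.0.0/24 addresses visible in the trace, and is not a verbatim excerpt of the harness; the comments are explanatory only.)

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end moves into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                      # bridge the two root-namespace peers
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                           # root ns -> namespaced target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # namespaced target -> root ns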
00:10:11.262 [2024-12-17 00:25:57.029745] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:11.262 [2024-12-17 00:25:57.169614] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:11.262 [2024-12-17 00:25:57.213573] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:11.262 [2024-12-17 00:25:57.213930] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:11.262 [2024-12-17 00:25:57.214079] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:11.262 [2024-12-17 00:25:57.214293] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:11.262 [2024-12-17 00:25:57.214457] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:11.262 [2024-12-17 00:25:57.214664] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.262 [2024-12-17 00:25:57.214773] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:11.262 [2024-12-17 00:25:57.214848] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.262 [2024-12-17 00:25:57.214848] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:11.262 [2024-12-17 00:25:57.248688] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:11.521 [2024-12-17 00:25:57.347569] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:11.521 Malloc0 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:11.521 00:25:57 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:11.521 [2024-12-17 00:25:57.396346] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:11.521 test case1: single bdev can't be used in multiple subsystems 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:11.521 [2024-12-17 00:25:57.420136] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:11.521 [2024-12-17 00:25:57.420188] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:11.521 [2024-12-17 00:25:57.420224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.521 request: 00:10:11.521 { 00:10:11.521 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:11.521 "namespace": { 00:10:11.521 "bdev_name": "Malloc0", 00:10:11.521 "no_auto_visible": false 00:10:11.521 }, 00:10:11.521 "method": "nvmf_subsystem_add_ns", 00:10:11.521 "req_id": 1 00:10:11.521 } 00:10:11.521 Got JSON-RPC error response 00:10:11.521 response: 00:10:11.521 { 00:10:11.521 "code": -32602, 00:10:11.521 "message": "Invalid parameters" 00:10:11.521 } 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:11.521 Adding namespace failed - expected result. 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:11.521 test case2: host connect to nvmf target in multiple paths 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:11.521 [2024-12-17 00:25:57.432341] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.521 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid=93817295-c2e4-400f-aefe-caa93fc06858 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:11.780 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid=93817295-c2e4-400f-aefe-caa93fc06858 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:10:11.780 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:11.780 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:11.780 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:11.780 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:11.780 00:25:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:14.316 00:25:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:14.316 00:25:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:14.316 00:25:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:14.316 00:25:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:14.316 00:25:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:14.316 00:25:59 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:14.316 00:25:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:14.316 [global] 00:10:14.316 thread=1 00:10:14.316 invalidate=1 00:10:14.316 rw=write 00:10:14.316 time_based=1 00:10:14.316 runtime=1 00:10:14.316 ioengine=libaio 00:10:14.316 direct=1 00:10:14.316 bs=4096 00:10:14.316 iodepth=1 00:10:14.316 norandommap=0 00:10:14.316 numjobs=1 00:10:14.316 00:10:14.316 verify_dump=1 00:10:14.316 verify_backlog=512 00:10:14.316 verify_state_save=0 00:10:14.316 do_verify=1 00:10:14.316 verify=crc32c-intel 00:10:14.316 [job0] 00:10:14.316 filename=/dev/nvme0n1 00:10:14.316 Could not set queue depth (nvme0n1) 00:10:14.316 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:14.316 fio-3.35 00:10:14.316 Starting 1 thread 00:10:15.251 00:10:15.251 job0: (groupid=0, jobs=1): err= 0: pid=77798: Tue Dec 17 00:26:01 2024 00:10:15.251 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:15.251 slat (nsec): min=11175, max=59167, avg=16239.47, stdev=4396.64 00:10:15.251 clat (usec): min=121, max=6570, avg=169.32, stdev=147.23 00:10:15.251 lat (usec): min=136, max=6585, avg=185.56, stdev=147.59 00:10:15.251 clat percentiles (usec): 00:10:15.251 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:10:15.251 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 167], 00:10:15.251 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 192], 95.00th=[ 202], 00:10:15.251 | 99.00th=[ 229], 99.50th=[ 247], 99.90th=[ 1450], 99.95th=[ 3949], 00:10:15.251 | 99.99th=[ 6587] 00:10:15.251 write: IOPS=3139, BW=12.3MiB/s (12.9MB/s)(12.3MiB/1001msec); 0 zone resets 00:10:15.251 slat (usec): min=16, max=108, avg=24.05, stdev= 6.31 00:10:15.251 clat (usec): min=76, max=3608, avg=109.22, stdev=87.09 00:10:15.251 lat (usec): min=93, max=3639, avg=133.27, stdev=88.04 00:10:15.251 clat percentiles (usec): 00:10:15.251 | 1.00th=[ 81], 5.00th=[ 84], 10.00th=[ 86], 20.00th=[ 90], 00:10:15.251 | 30.00th=[ 93], 40.00th=[ 97], 50.00th=[ 102], 60.00th=[ 106], 00:10:15.251 | 70.00th=[ 113], 80.00th=[ 121], 90.00th=[ 131], 95.00th=[ 143], 00:10:15.251 | 99.00th=[ 167], 99.50th=[ 192], 99.90th=[ 1270], 99.95th=[ 2089], 00:10:15.251 | 99.99th=[ 3621] 00:10:15.251 bw ( KiB/s): min=12288, max=12288, per=97.84%, avg=12288.00, stdev= 0.00, samples=1 00:10:15.251 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:15.251 lat (usec) : 100=23.64%, 250=75.98%, 500=0.16%, 750=0.06%, 1000=0.03% 00:10:15.251 lat (msec) : 2=0.05%, 4=0.06%, 10=0.02% 00:10:15.251 cpu : usr=2.80%, sys=9.70%, ctx=6223, majf=0, minf=5 00:10:15.251 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:15.251 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.251 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.251 issued rwts: total=3072,3143,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.251 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:15.251 00:10:15.251 Run status group 0 (all jobs): 00:10:15.251 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:10:15.251 WRITE: bw=12.3MiB/s (12.9MB/s), 12.3MiB/s-12.3MiB/s (12.9MB/s-12.9MB/s), io=12.3MiB (12.9MB), run=1001-1001msec 00:10:15.251 00:10:15.251 Disk stats (read/write): 
00:10:15.251 nvme0n1: ios=2610/3065, merge=0/0, ticks=467/356, in_queue=823, util=90.68% 00:10:15.251 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:15.251 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:15.251 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:15.251 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:15.252 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:15.252 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:15.252 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:15.252 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:15.252 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:15.252 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:15.252 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:15.252 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:15.252 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:15.252 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:15.252 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:15.252 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:15.252 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:15.252 rmmod nvme_tcp 00:10:15.252 rmmod nvme_fabrics 00:10:15.252 rmmod nvme_keyring 00:10:15.252 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:15.252 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:15.252 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:15.252 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 77718 ']' 00:10:15.252 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 77718 00:10:15.252 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 77718 ']' 00:10:15.252 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 77718 00:10:15.252 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:15.252 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:15.252 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77718 00:10:15.511 killing process with pid 77718 00:10:15.511 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:15.511 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:15.511 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77718' 00:10:15.511 00:26:01 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 77718 00:10:15.511 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 77718 00:10:15.511 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:15.511 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:15.511 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:15.511 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:15.511 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:10:15.511 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:15.511 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:10:15.511 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:15.511 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:15.511 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:15.511 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:15.511 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:15.511 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:15.511 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:15.511 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:15.770 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:15.770 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:15.770 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:15.770 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:15.770 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:15.770 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:15.770 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:15.770 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:15.770 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.770 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:15.770 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.770 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:10:15.770 00:10:15.770 real 0m5.331s 00:10:15.770 user 0m15.504s 00:10:15.770 sys 0m2.379s 00:10:15.770 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:15.770 ************************************ 00:10:15.770 END TEST 
nvmf_nmic 00:10:15.770 ************************************ 00:10:15.770 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:15.770 00:26:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:15.770 00:26:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:15.770 00:26:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:15.770 00:26:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:15.770 ************************************ 00:10:15.770 START TEST nvmf_fio_target 00:10:15.770 ************************************ 00:10:15.770 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:16.029 * Looking for test storage... 00:10:16.029 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:16.029 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:16.029 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:10:16.029 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:16.029 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:16.029 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:16.029 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:16.029 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:16.029 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:16.029 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:16.029 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:16.029 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:16.029 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:16.029 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:16.029 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:16.029 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:16.029 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:16.029 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:16.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.030 --rc genhtml_branch_coverage=1 00:10:16.030 --rc genhtml_function_coverage=1 00:10:16.030 --rc genhtml_legend=1 00:10:16.030 --rc geninfo_all_blocks=1 00:10:16.030 --rc geninfo_unexecuted_blocks=1 00:10:16.030 00:10:16.030 ' 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:16.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.030 --rc genhtml_branch_coverage=1 00:10:16.030 --rc genhtml_function_coverage=1 00:10:16.030 --rc genhtml_legend=1 00:10:16.030 --rc geninfo_all_blocks=1 00:10:16.030 --rc geninfo_unexecuted_blocks=1 00:10:16.030 00:10:16.030 ' 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:16.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.030 --rc genhtml_branch_coverage=1 00:10:16.030 --rc genhtml_function_coverage=1 00:10:16.030 --rc genhtml_legend=1 00:10:16.030 --rc geninfo_all_blocks=1 00:10:16.030 --rc geninfo_unexecuted_blocks=1 00:10:16.030 00:10:16.030 ' 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:16.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.030 --rc genhtml_branch_coverage=1 00:10:16.030 --rc genhtml_function_coverage=1 00:10:16.030 --rc genhtml_legend=1 00:10:16.030 --rc geninfo_all_blocks=1 00:10:16.030 --rc geninfo_unexecuted_blocks=1 00:10:16.030 00:10:16.030 ' 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:16.030 
00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:16.030 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:16.030 00:26:01 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:16.030 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.031 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:16.031 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.031 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:16.031 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:16.031 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:16.031 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:16.031 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:16.031 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:16.031 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:16.031 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:16.031 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:16.031 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:16.031 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:16.031 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:16.031 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:16.031 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:16.031 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:16.031 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:16.031 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:16.031 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:16.031 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:16.031 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:16.031 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:16.031 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:16.031 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:16.031 Cannot find device "nvmf_init_br" 00:10:16.031 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:10:16.031 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:16.031 Cannot find device "nvmf_init_br2" 00:10:16.031 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:10:16.031 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:16.031 Cannot find device "nvmf_tgt_br" 00:10:16.031 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:10:16.031 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:16.031 Cannot find device "nvmf_tgt_br2" 00:10:16.031 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:10:16.031 00:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:16.031 Cannot find device "nvmf_init_br" 00:10:16.031 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:10:16.031 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:16.031 Cannot find device "nvmf_init_br2" 00:10:16.031 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:10:16.031 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:16.031 Cannot find device "nvmf_tgt_br" 00:10:16.031 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:10:16.031 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:16.290 Cannot find device "nvmf_tgt_br2" 00:10:16.290 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:10:16.290 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:16.290 Cannot find device "nvmf_br" 00:10:16.290 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:10:16.290 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:16.290 Cannot find device "nvmf_init_if" 00:10:16.290 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:10:16.290 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:16.290 Cannot find device "nvmf_init_if2" 00:10:16.290 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:10:16.290 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:16.290 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:16.290 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:10:16.290 
00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:16.290 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:16.290 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:10:16.290 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:16.290 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:16.290 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:16.290 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:16.290 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:16.290 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:16.290 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:16.290 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:16.290 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:16.290 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:16.290 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:16.290 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:16.290 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:16.290 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:16.290 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:16.290 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:16.290 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:16.290 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:16.290 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:16.290 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:16.290 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:16.290 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:16.290 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:16.290 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:10:16.290 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:16.290 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:16.550 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:16.550 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:16.550 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:16.550 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:16.550 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:16.550 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:16.550 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:16.550 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:16.550 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:10:16.550 00:10:16.550 --- 10.0.0.3 ping statistics --- 00:10:16.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.550 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:10:16.550 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:16.550 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:16.550 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:10:16.550 00:10:16.550 --- 10.0.0.4 ping statistics --- 00:10:16.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.550 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:10:16.550 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:16.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:16.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:10:16.550 00:10:16.550 --- 10.0.0.1 ping statistics --- 00:10:16.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.550 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:10:16.550 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:16.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
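The commands above rebuild the test topology: the target's two data interfaces live in the nvmf_tgt_ns_spdk namespace with 10.0.0.3/4, the initiator ends stay on the host with 10.0.0.1/2, all of the veth peers are enslaved to the nvmf_br bridge, iptables opens TCP port 4420 on the initiator interfaces and allows bridge-local forwarding, and the pings prove the path works in both directions. A condensed sketch of one initiator/target pair (the second pair is identical apart from names and addresses):

# Target side in a network namespace, initiator side on the host,
# bridge ends of both veth pairs enslaved to nvmf_br.
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br     # host side, 10.0.0.1
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side, 10.0.0.3
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br

# Open the NVMe/TCP port on the initiator interface and let traffic cross the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.3                                   # host -> namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # namespace -> host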
00:10:16.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:10:16.550 00:10:16.550 --- 10.0.0.2 ping statistics --- 00:10:16.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.550 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:10:16.550 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:16.550 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@457 -- # return 0 00:10:16.550 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:16.550 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:16.550 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:16.550 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:16.550 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:16.550 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:16.550 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:16.550 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:16.550 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:16.550 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:16.550 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.550 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=78030 00:10:16.550 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:16.550 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 78030 00:10:16.550 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 78030 ']' 00:10:16.550 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.550 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:16.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.550 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.550 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:16.550 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.550 [2024-12-17 00:26:02.393871] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
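nvmfappstart then loads the kernel NVMe/TCP initiator module and launches the SPDK target inside the namespace with the command line shown (-i 0 shared-memory id, -e 0xFFFF all tracepoint groups, -m 0xF four cores), and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A stripped-down version of that start-and-wait pattern; $SPDK_DIR stands in for the repo path in the log, and the polling loop is a simplification of what waitforlisten actually does:

modprobe nvme-tcp                      # kernel initiator used later by "nvme connect"

ip netns exec nvmf_tgt_ns_spdk \
    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Poll the RPC socket until the target is ready to accept rpc.py calls.
until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.5
done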
00:10:16.550 [2024-12-17 00:26:02.393969] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:16.550 [2024-12-17 00:26:02.529459] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:16.809 [2024-12-17 00:26:02.562483] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:16.809 [2024-12-17 00:26:02.562550] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:16.809 [2024-12-17 00:26:02.562577] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:16.809 [2024-12-17 00:26:02.562584] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:16.809 [2024-12-17 00:26:02.562590] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:16.809 [2024-12-17 00:26:02.562653] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:16.809 [2024-12-17 00:26:02.562783] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:16.809 [2024-12-17 00:26:02.563481] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:16.809 [2024-12-17 00:26:02.563499] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.809 [2024-12-17 00:26:02.591619] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:16.809 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:16.809 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:16.809 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:16.809 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:16.809 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.809 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:16.809 00:26:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:17.068 [2024-12-17 00:26:03.004411] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:17.068 00:26:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:17.634 00:26:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:17.635 00:26:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:17.635 00:26:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:17.635 00:26:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:17.893 00:26:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:17.893 00:26:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:18.153 00:26:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:18.153 00:26:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:18.411 00:26:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:18.978 00:26:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:18.978 00:26:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:19.237 00:26:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:19.237 00:26:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:19.495 00:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:19.495 00:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:19.753 00:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:19.753 00:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:19.753 00:26:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:20.320 00:26:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:20.320 00:26:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:20.320 00:26:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:20.579 [2024-12-17 00:26:06.569261] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:20.838 00:26:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:20.838 00:26:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:21.096 00:26:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid=93817295-c2e4-400f-aefe-caa93fc06858 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:21.354 00:26:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:21.354 00:26:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:21.354 00:26:07 
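Between the transport creation and the first fio pass, rpc.py provisions the storage behind the four test namespaces: seven 64 MiB malloc bdevs with 512-byte blocks, a RAID-0 array over Malloc2/Malloc3, a concat array over Malloc4-6, one subsystem (cnode1) exposing Malloc0, Malloc1, raid0 and concat0, and a TCP listener on 10.0.0.3:4420; the kernel initiator then connects (the log additionally passes --hostnqn/--hostid, omitted here) and waitforserial waits until four block devices advertise the SPDKISFASTANDAWESOME serial. Collapsed into a sketch:

rpc="$SPDK_DIR/scripts/rpc.py"
nqn=nqn.2016-06.io.spdk:cnode1

$rpc nvmf_create_transport -t tcp -o -u 8192

for i in $(seq 0 6); do $rpc bdev_malloc_create 64 512; done         # Malloc0..Malloc6
$rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'

$rpc nvmf_create_subsystem "$nqn" -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
    $rpc nvmf_subsystem_add_ns "$nqn" "$bdev"
done
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420

nvme connect -t tcp -n "$nqn" -a 10.0.0.3 -s 4420
# waitforserial: all four namespaces must show up with the subsystem's serial.
until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -eq 4 ]; do sleep 1; done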
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:21.354 00:26:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:21.354 00:26:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:21.354 00:26:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:23.256 00:26:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:23.256 00:26:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:23.256 00:26:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:23.256 00:26:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:23.256 00:26:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:23.256 00:26:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:23.256 00:26:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:23.256 [global] 00:10:23.256 thread=1 00:10:23.256 invalidate=1 00:10:23.256 rw=write 00:10:23.256 time_based=1 00:10:23.256 runtime=1 00:10:23.256 ioengine=libaio 00:10:23.256 direct=1 00:10:23.256 bs=4096 00:10:23.256 iodepth=1 00:10:23.256 norandommap=0 00:10:23.256 numjobs=1 00:10:23.256 00:10:23.256 verify_dump=1 00:10:23.256 verify_backlog=512 00:10:23.256 verify_state_save=0 00:10:23.256 do_verify=1 00:10:23.256 verify=crc32c-intel 00:10:23.256 [job0] 00:10:23.256 filename=/dev/nvme0n1 00:10:23.256 [job1] 00:10:23.256 filename=/dev/nvme0n2 00:10:23.515 [job2] 00:10:23.515 filename=/dev/nvme0n3 00:10:23.515 [job3] 00:10:23.515 filename=/dev/nvme0n4 00:10:23.515 Could not set queue depth (nvme0n1) 00:10:23.515 Could not set queue depth (nvme0n2) 00:10:23.515 Could not set queue depth (nvme0n3) 00:10:23.515 Could not set queue depth (nvme0n4) 00:10:23.515 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:23.515 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:23.515 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:23.515 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:23.515 fio-3.35 00:10:23.515 Starting 4 threads 00:10:24.929 00:10:24.929 job0: (groupid=0, jobs=1): err= 0: pid=78208: Tue Dec 17 00:26:10 2024 00:10:24.929 read: IOPS=1880, BW=7520KiB/s (7701kB/s)(7528KiB/1001msec) 00:10:24.929 slat (nsec): min=12795, max=47209, avg=15403.03, stdev=3438.83 00:10:24.929 clat (usec): min=160, max=7156, avg=277.67, stdev=185.92 00:10:24.929 lat (usec): min=173, max=7172, avg=293.07, stdev=186.50 00:10:24.929 clat percentiles (usec): 00:10:24.929 | 1.00th=[ 233], 5.00th=[ 239], 10.00th=[ 241], 20.00th=[ 245], 00:10:24.929 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 260], 00:10:24.929 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 330], 95.00th=[ 375], 00:10:24.929 | 99.00th=[ 490], 99.50th=[ 502], 99.90th=[ 3523], 99.95th=[ 7177], 00:10:24.929 | 99.99th=[ 
7177] 00:10:24.929 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:24.929 slat (usec): min=18, max=135, avg=22.97, stdev= 6.22 00:10:24.929 clat (usec): min=106, max=3594, avg=192.49, stdev=86.97 00:10:24.929 lat (usec): min=125, max=3631, avg=215.46, stdev=88.90 00:10:24.929 clat percentiles (usec): 00:10:24.929 | 1.00th=[ 115], 5.00th=[ 123], 10.00th=[ 130], 20.00th=[ 176], 00:10:24.929 | 30.00th=[ 186], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 196], 00:10:24.929 | 70.00th=[ 200], 80.00th=[ 206], 90.00th=[ 219], 95.00th=[ 241], 00:10:24.929 | 99.00th=[ 367], 99.50th=[ 392], 99.90th=[ 457], 99.95th=[ 474], 00:10:24.929 | 99.99th=[ 3589] 00:10:24.929 bw ( KiB/s): min= 8192, max= 8192, per=25.05%, avg=8192.00, stdev= 0.00, samples=1 00:10:24.929 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:24.929 lat (usec) : 250=66.64%, 500=33.08%, 750=0.13%, 1000=0.03% 00:10:24.929 lat (msec) : 2=0.05%, 4=0.05%, 10=0.03% 00:10:24.929 cpu : usr=2.10%, sys=5.50%, ctx=3930, majf=0, minf=13 00:10:24.929 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:24.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.929 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.929 issued rwts: total=1882,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.929 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:24.929 job1: (groupid=0, jobs=1): err= 0: pid=78209: Tue Dec 17 00:26:10 2024 00:10:24.929 read: IOPS=1908, BW=7633KiB/s (7816kB/s)(7648KiB/1002msec) 00:10:24.929 slat (nsec): min=12419, max=41613, avg=14595.93, stdev=2236.23 00:10:24.929 clat (usec): min=154, max=1908, avg=262.21, stdev=48.80 00:10:24.929 lat (usec): min=168, max=1927, avg=276.81, stdev=49.05 00:10:24.929 clat percentiles (usec): 00:10:24.929 | 1.00th=[ 229], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 247], 00:10:24.929 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 255], 60.00th=[ 262], 00:10:24.929 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 293], 00:10:24.929 | 99.00th=[ 379], 99.50th=[ 441], 99.90th=[ 783], 99.95th=[ 1909], 00:10:24.929 | 99.99th=[ 1909] 00:10:24.929 write: IOPS=2043, BW=8176KiB/s (8372kB/s)(8192KiB/1002msec); 0 zone resets 00:10:24.929 slat (usec): min=18, max=116, avg=24.72, stdev= 8.52 00:10:24.929 clat (usec): min=93, max=558, avg=201.75, stdev=26.63 00:10:24.929 lat (usec): min=130, max=584, avg=226.48, stdev=30.18 00:10:24.929 clat percentiles (usec): 00:10:24.929 | 1.00th=[ 128], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 190], 00:10:24.929 | 30.00th=[ 192], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 200], 00:10:24.929 | 70.00th=[ 206], 80.00th=[ 210], 90.00th=[ 221], 95.00th=[ 233], 00:10:24.929 | 99.00th=[ 343], 99.50th=[ 359], 99.90th=[ 433], 99.95th=[ 437], 00:10:24.929 | 99.99th=[ 562] 00:10:24.929 bw ( KiB/s): min= 8192, max= 8192, per=25.05%, avg=8192.00, stdev= 0.00, samples=1 00:10:24.929 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:24.929 lat (usec) : 100=0.03%, 250=66.14%, 500=33.64%, 750=0.15%, 1000=0.03% 00:10:24.929 lat (msec) : 2=0.03% 00:10:24.929 cpu : usr=1.60%, sys=6.09%, ctx=3962, majf=0, minf=5 00:10:24.929 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:24.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.929 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.929 issued rwts: total=1912,2048,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:10:24.929 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:24.929 job2: (groupid=0, jobs=1): err= 0: pid=78210: Tue Dec 17 00:26:10 2024 00:10:24.929 read: IOPS=1920, BW=7680KiB/s (7865kB/s)(7688KiB/1001msec) 00:10:24.929 slat (usec): min=12, max=106, avg=17.30, stdev= 5.91 00:10:24.929 clat (usec): min=155, max=602, avg=273.94, stdev=67.10 00:10:24.929 lat (usec): min=170, max=617, avg=291.24, stdev=70.14 00:10:24.929 clat percentiles (usec): 00:10:24.929 | 1.00th=[ 176], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 243], 00:10:24.929 | 30.00th=[ 247], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 258], 00:10:24.929 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 334], 95.00th=[ 465], 00:10:24.929 | 99.00th=[ 510], 99.50th=[ 523], 99.90th=[ 553], 99.95th=[ 603], 00:10:24.929 | 99.99th=[ 603] 00:10:24.929 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:24.929 slat (usec): min=17, max=133, avg=22.23, stdev= 4.90 00:10:24.929 clat (usec): min=108, max=454, avg=188.89, stdev=28.30 00:10:24.929 lat (usec): min=130, max=588, avg=211.12, stdev=29.10 00:10:24.929 clat percentiles (usec): 00:10:24.929 | 1.00th=[ 117], 5.00th=[ 128], 10.00th=[ 139], 20.00th=[ 180], 00:10:24.929 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 192], 60.00th=[ 196], 00:10:24.929 | 70.00th=[ 200], 80.00th=[ 206], 90.00th=[ 217], 95.00th=[ 227], 00:10:24.929 | 99.00th=[ 249], 99.50th=[ 253], 99.90th=[ 273], 99.95th=[ 273], 00:10:24.929 | 99.99th=[ 453] 00:10:24.929 bw ( KiB/s): min= 8192, max= 8192, per=25.05%, avg=8192.00, stdev= 0.00, samples=1 00:10:24.929 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:24.929 lat (usec) : 250=70.98%, 500=28.29%, 750=0.73% 00:10:24.929 cpu : usr=1.90%, sys=6.10%, ctx=3971, majf=0, minf=5 00:10:24.929 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:24.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.929 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.929 issued rwts: total=1922,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.929 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:24.929 job3: (groupid=0, jobs=1): err= 0: pid=78211: Tue Dec 17 00:26:10 2024 00:10:24.929 read: IOPS=1915, BW=7660KiB/s (7844kB/s)(7668KiB/1001msec) 00:10:24.929 slat (nsec): min=11956, max=94290, avg=15341.26, stdev=4743.73 00:10:24.929 clat (usec): min=154, max=828, avg=261.72, stdev=35.07 00:10:24.929 lat (usec): min=170, max=843, avg=277.06, stdev=36.07 00:10:24.929 clat percentiles (usec): 00:10:24.929 | 1.00th=[ 217], 5.00th=[ 239], 10.00th=[ 241], 20.00th=[ 247], 00:10:24.929 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 255], 60.00th=[ 260], 00:10:24.929 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 297], 00:10:24.929 | 99.00th=[ 396], 99.50th=[ 474], 99.90th=[ 766], 99.95th=[ 832], 00:10:24.929 | 99.99th=[ 832] 00:10:24.929 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:24.929 slat (usec): min=17, max=204, avg=24.61, stdev= 9.62 00:10:24.929 clat (usec): min=114, max=563, avg=200.93, stdev=22.65 00:10:24.929 lat (usec): min=134, max=655, avg=225.54, stdev=26.53 00:10:24.929 clat percentiles (usec): 00:10:24.929 | 1.00th=[ 130], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 190], 00:10:24.929 | 30.00th=[ 192], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 202], 00:10:24.929 | 70.00th=[ 206], 80.00th=[ 212], 90.00th=[ 223], 95.00th=[ 235], 00:10:24.930 | 99.00th=[ 273], 99.50th=[ 
293], 99.90th=[ 400], 99.95th=[ 441], 00:10:24.930 | 99.99th=[ 562] 00:10:24.930 bw ( KiB/s): min= 8192, max= 8192, per=25.05%, avg=8192.00, stdev= 0.00, samples=1 00:10:24.930 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:24.930 lat (usec) : 250=66.41%, 500=33.44%, 750=0.08%, 1000=0.08% 00:10:24.930 cpu : usr=1.80%, sys=6.20%, ctx=3972, majf=0, minf=13 00:10:24.930 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:24.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.930 issued rwts: total=1917,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.930 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:24.930 00:10:24.930 Run status group 0 (all jobs): 00:10:24.930 READ: bw=29.8MiB/s (31.2MB/s), 7520KiB/s-7680KiB/s (7701kB/s-7865kB/s), io=29.8MiB (31.3MB), run=1001-1002msec 00:10:24.930 WRITE: bw=31.9MiB/s (33.5MB/s), 8176KiB/s-8184KiB/s (8372kB/s-8380kB/s), io=32.0MiB (33.6MB), run=1001-1002msec 00:10:24.930 00:10:24.930 Disk stats (read/write): 00:10:24.930 nvme0n1: ios=1586/1824, merge=0/0, ticks=470/368, in_queue=838, util=87.47% 00:10:24.930 nvme0n2: ios=1584/1889, merge=0/0, ticks=436/407, in_queue=843, util=88.96% 00:10:24.930 nvme0n3: ios=1536/1941, merge=0/0, ticks=426/385, in_queue=811, util=89.14% 00:10:24.930 nvme0n4: ios=1536/1898, merge=0/0, ticks=410/393, in_queue=803, util=89.79% 00:10:24.930 00:26:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:24.930 [global] 00:10:24.930 thread=1 00:10:24.930 invalidate=1 00:10:24.930 rw=randwrite 00:10:24.930 time_based=1 00:10:24.930 runtime=1 00:10:24.930 ioengine=libaio 00:10:24.930 direct=1 00:10:24.930 bs=4096 00:10:24.930 iodepth=1 00:10:24.930 norandommap=0 00:10:24.930 numjobs=1 00:10:24.930 00:10:24.930 verify_dump=1 00:10:24.930 verify_backlog=512 00:10:24.930 verify_state_save=0 00:10:24.930 do_verify=1 00:10:24.930 verify=crc32c-intel 00:10:24.930 [job0] 00:10:24.930 filename=/dev/nvme0n1 00:10:24.930 [job1] 00:10:24.930 filename=/dev/nvme0n2 00:10:24.930 [job2] 00:10:24.930 filename=/dev/nvme0n3 00:10:24.930 [job3] 00:10:24.930 filename=/dev/nvme0n4 00:10:24.930 Could not set queue depth (nvme0n1) 00:10:24.930 Could not set queue depth (nvme0n2) 00:10:24.930 Could not set queue depth (nvme0n3) 00:10:24.930 Could not set queue depth (nvme0n4) 00:10:24.930 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:24.930 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:24.930 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:24.930 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:24.930 fio-3.35 00:10:24.930 Starting 4 threads 00:10:26.304 00:10:26.304 job0: (groupid=0, jobs=1): err= 0: pid=78264: Tue Dec 17 00:26:11 2024 00:10:26.304 read: IOPS=1854, BW=7417KiB/s (7595kB/s)(7424KiB/1001msec) 00:10:26.304 slat (nsec): min=8307, max=44287, avg=13081.73, stdev=3453.51 00:10:26.304 clat (usec): min=168, max=436, avg=270.80, stdev=28.45 00:10:26.304 lat (usec): min=183, max=451, avg=283.89, stdev=28.77 00:10:26.304 clat percentiles (usec): 00:10:26.304 | 1.00th=[ 223], 5.00th=[ 235], 
10.00th=[ 241], 20.00th=[ 249], 00:10:26.304 | 30.00th=[ 255], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 273], 00:10:26.304 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 306], 95.00th=[ 322], 00:10:26.304 | 99.00th=[ 375], 99.50th=[ 383], 99.90th=[ 416], 99.95th=[ 437], 00:10:26.304 | 99.99th=[ 437] 00:10:26.304 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:26.304 slat (usec): min=11, max=128, avg=21.48, stdev= 7.03 00:10:26.304 clat (usec): min=103, max=3145, avg=206.42, stdev=106.17 00:10:26.304 lat (usec): min=129, max=3166, avg=227.89, stdev=106.87 00:10:26.304 clat percentiles (usec): 00:10:26.304 | 1.00th=[ 121], 5.00th=[ 135], 10.00th=[ 151], 20.00th=[ 188], 00:10:26.304 | 30.00th=[ 194], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 212], 00:10:26.304 | 70.00th=[ 217], 80.00th=[ 225], 90.00th=[ 237], 95.00th=[ 247], 00:10:26.304 | 99.00th=[ 281], 99.50th=[ 293], 99.90th=[ 2343], 99.95th=[ 2966], 00:10:26.304 | 99.99th=[ 3130] 00:10:26.304 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:10:26.304 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:26.304 lat (usec) : 250=60.86%, 500=39.04%, 750=0.03% 00:10:26.304 lat (msec) : 4=0.08% 00:10:26.304 cpu : usr=1.40%, sys=5.70%, ctx=3928, majf=0, minf=15 00:10:26.304 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.305 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.305 issued rwts: total=1856,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.305 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.305 job1: (groupid=0, jobs=1): err= 0: pid=78265: Tue Dec 17 00:26:11 2024 00:10:26.305 read: IOPS=2882, BW=11.3MiB/s (11.8MB/s)(11.3MiB/1001msec) 00:10:26.305 slat (nsec): min=10974, max=42555, avg=13374.65, stdev=3551.44 00:10:26.305 clat (usec): min=136, max=651, avg=170.72, stdev=23.23 00:10:26.305 lat (usec): min=148, max=672, avg=184.10, stdev=23.71 00:10:26.305 clat percentiles (usec): 00:10:26.305 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:10:26.305 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 00:10:26.305 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 194], 95.00th=[ 204], 00:10:26.305 | 99.00th=[ 225], 99.50th=[ 237], 99.90th=[ 449], 99.95th=[ 619], 00:10:26.305 | 99.99th=[ 652] 00:10:26.305 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:26.305 slat (usec): min=13, max=822, avg=20.66, stdev=17.61 00:10:26.305 clat (usec): min=2, max=2379, avg=128.76, stdev=60.26 00:10:26.305 lat (usec): min=109, max=2410, avg=149.43, stdev=62.91 00:10:26.305 clat percentiles (usec): 00:10:26.305 | 1.00th=[ 98], 5.00th=[ 103], 10.00th=[ 108], 20.00th=[ 112], 00:10:26.305 | 30.00th=[ 116], 40.00th=[ 120], 50.00th=[ 123], 60.00th=[ 127], 00:10:26.305 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 147], 95.00th=[ 159], 00:10:26.305 | 99.00th=[ 247], 99.50th=[ 355], 99.90th=[ 898], 99.95th=[ 930], 00:10:26.305 | 99.99th=[ 2376] 00:10:26.305 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:10:26.305 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:26.305 lat (usec) : 4=0.03%, 100=1.18%, 250=98.14%, 500=0.44%, 750=0.10% 00:10:26.305 lat (usec) : 1000=0.10% 00:10:26.305 lat (msec) : 4=0.02% 00:10:26.305 cpu : usr=2.10%, sys=8.00%, ctx=5971, majf=0, minf=12 00:10:26.305 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.305 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.305 issued rwts: total=2885,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.305 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.305 job2: (groupid=0, jobs=1): err= 0: pid=78266: Tue Dec 17 00:26:11 2024 00:10:26.305 read: IOPS=1888, BW=7552KiB/s (7734kB/s)(7560KiB/1001msec) 00:10:26.305 slat (nsec): min=8258, max=97405, avg=14358.74, stdev=5239.85 00:10:26.305 clat (usec): min=190, max=442, avg=269.79, stdev=28.11 00:10:26.305 lat (usec): min=203, max=452, avg=284.15, stdev=28.56 00:10:26.305 clat percentiles (usec): 00:10:26.305 | 1.00th=[ 221], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 247], 00:10:26.305 | 30.00th=[ 253], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 273], 00:10:26.305 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 302], 95.00th=[ 322], 00:10:26.305 | 99.00th=[ 359], 99.50th=[ 367], 99.90th=[ 420], 99.95th=[ 445], 00:10:26.305 | 99.99th=[ 445] 00:10:26.305 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:26.305 slat (usec): min=10, max=194, avg=18.99, stdev= 7.41 00:10:26.305 clat (usec): min=112, max=410, avg=203.96, stdev=33.47 00:10:26.305 lat (usec): min=140, max=471, avg=222.94, stdev=32.17 00:10:26.305 clat percentiles (usec): 00:10:26.305 | 1.00th=[ 122], 5.00th=[ 131], 10.00th=[ 145], 20.00th=[ 188], 00:10:26.305 | 30.00th=[ 196], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 215], 00:10:26.305 | 70.00th=[ 221], 80.00th=[ 229], 90.00th=[ 239], 95.00th=[ 249], 00:10:26.305 | 99.00th=[ 277], 99.50th=[ 293], 99.90th=[ 310], 99.95th=[ 310], 00:10:26.305 | 99.99th=[ 412] 00:10:26.305 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:10:26.305 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:26.305 lat (usec) : 250=61.48%, 500=38.52% 00:10:26.305 cpu : usr=0.70%, sys=6.20%, ctx=3948, majf=0, minf=9 00:10:26.305 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.305 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.305 issued rwts: total=1890,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.305 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.305 job3: (groupid=0, jobs=1): err= 0: pid=78267: Tue Dec 17 00:26:11 2024 00:10:26.305 read: IOPS=2575, BW=10.1MiB/s (10.5MB/s)(10.1MiB/1001msec) 00:10:26.305 slat (nsec): min=11723, max=66188, avg=14584.80, stdev=3799.46 00:10:26.305 clat (usec): min=124, max=866, avg=181.28, stdev=24.42 00:10:26.305 lat (usec): min=153, max=882, avg=195.87, stdev=24.99 00:10:26.305 clat percentiles (usec): 00:10:26.305 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 165], 00:10:26.305 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 180], 60.00th=[ 184], 00:10:26.305 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 206], 95.00th=[ 217], 00:10:26.305 | 99.00th=[ 237], 99.50th=[ 241], 99.90th=[ 388], 99.95th=[ 506], 00:10:26.305 | 99.99th=[ 865] 00:10:26.305 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:26.305 slat (nsec): min=14511, max=84864, avg=20866.83, stdev=5149.75 00:10:26.305 clat (usec): min=100, max=242, avg=137.49, stdev=16.10 00:10:26.305 lat (usec): min=118, max=320, avg=158.35, stdev=17.06 
00:10:26.305 clat percentiles (usec): 00:10:26.305 | 1.00th=[ 108], 5.00th=[ 115], 10.00th=[ 120], 20.00th=[ 125], 00:10:26.305 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 141], 00:10:26.305 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 159], 95.00th=[ 167], 00:10:26.305 | 99.00th=[ 186], 99.50th=[ 192], 99.90th=[ 212], 99.95th=[ 225], 00:10:26.305 | 99.99th=[ 243] 00:10:26.305 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:10:26.305 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:26.305 lat (usec) : 250=99.89%, 500=0.07%, 750=0.02%, 1000=0.02% 00:10:26.305 cpu : usr=1.90%, sys=8.10%, ctx=5651, majf=0, minf=13 00:10:26.305 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.305 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.305 issued rwts: total=2578,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.305 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.305 00:10:26.305 Run status group 0 (all jobs): 00:10:26.305 READ: bw=35.9MiB/s (37.7MB/s), 7417KiB/s-11.3MiB/s (7595kB/s-11.8MB/s), io=36.0MiB (37.7MB), run=1001-1001msec 00:10:26.305 WRITE: bw=40.0MiB/s (41.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=40.0MiB (41.9MB), run=1001-1001msec 00:10:26.305 00:10:26.305 Disk stats (read/write): 00:10:26.305 nvme0n1: ios=1586/1888, merge=0/0, ticks=420/385, in_queue=805, util=88.00% 00:10:26.305 nvme0n2: ios=2609/2686, merge=0/0, ticks=483/353, in_queue=836, util=90.42% 00:10:26.305 nvme0n3: ios=1536/1943, merge=0/0, ticks=414/371, in_queue=785, util=89.36% 00:10:26.305 nvme0n4: ios=2346/2560, merge=0/0, ticks=451/371, in_queue=822, util=90.11% 00:10:26.305 00:26:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:26.305 [global] 00:10:26.305 thread=1 00:10:26.305 invalidate=1 00:10:26.305 rw=write 00:10:26.305 time_based=1 00:10:26.305 runtime=1 00:10:26.305 ioengine=libaio 00:10:26.305 direct=1 00:10:26.305 bs=4096 00:10:26.305 iodepth=128 00:10:26.305 norandommap=0 00:10:26.305 numjobs=1 00:10:26.305 00:10:26.305 verify_dump=1 00:10:26.305 verify_backlog=512 00:10:26.305 verify_state_save=0 00:10:26.305 do_verify=1 00:10:26.305 verify=crc32c-intel 00:10:26.305 [job0] 00:10:26.305 filename=/dev/nvme0n1 00:10:26.305 [job1] 00:10:26.305 filename=/dev/nvme0n2 00:10:26.305 [job2] 00:10:26.305 filename=/dev/nvme0n3 00:10:26.305 [job3] 00:10:26.305 filename=/dev/nvme0n4 00:10:26.305 Could not set queue depth (nvme0n1) 00:10:26.305 Could not set queue depth (nvme0n2) 00:10:26.305 Could not set queue depth (nvme0n3) 00:10:26.305 Could not set queue depth (nvme0n4) 00:10:26.305 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:26.305 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:26.305 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:26.305 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:26.305 fio-3.35 00:10:26.305 Starting 4 threads 00:10:27.683 00:10:27.683 job0: (groupid=0, jobs=1): err= 0: pid=78332: Tue Dec 17 00:26:13 2024 00:10:27.683 read: IOPS=3059, BW=12.0MiB/s 
(12.5MB/s)(12.0MiB/1004msec) 00:10:27.683 slat (usec): min=7, max=7111, avg=157.23, stdev=792.14 00:10:27.683 clat (usec): min=13435, max=23384, avg=19902.45, stdev=1355.72 00:10:27.683 lat (usec): min=17233, max=23400, avg=20059.68, stdev=1143.85 00:10:27.683 clat percentiles (usec): 00:10:27.683 | 1.00th=[15533], 5.00th=[17695], 10.00th=[17957], 20.00th=[19006], 00:10:27.683 | 30.00th=[19792], 40.00th=[19792], 50.00th=[20055], 60.00th=[20317], 00:10:27.683 | 70.00th=[20579], 80.00th=[20841], 90.00th=[21103], 95.00th=[22152], 00:10:27.683 | 99.00th=[23200], 99.50th=[23200], 99.90th=[23462], 99.95th=[23462], 00:10:27.683 | 99.99th=[23462] 00:10:27.683 write: IOPS=3358, BW=13.1MiB/s (13.8MB/s)(13.2MiB/1004msec); 0 zone resets 00:10:27.683 slat (usec): min=13, max=4818, avg=144.73, stdev=678.00 00:10:27.683 clat (usec): min=3229, max=22708, avg=19436.23, stdev=2193.26 00:10:27.683 lat (usec): min=3244, max=22734, avg=19580.96, stdev=2079.59 00:10:27.683 clat percentiles (usec): 00:10:27.683 | 1.00th=[ 8094], 5.00th=[16581], 10.00th=[17695], 20.00th=[19006], 00:10:27.683 | 30.00th=[19006], 40.00th=[19530], 50.00th=[19530], 60.00th=[19792], 00:10:27.683 | 70.00th=[19792], 80.00th=[20317], 90.00th=[21890], 95.00th=[22414], 00:10:27.683 | 99.00th=[22676], 99.50th=[22676], 99.90th=[22676], 99.95th=[22676], 00:10:27.683 | 99.99th=[22676] 00:10:27.683 bw ( KiB/s): min=12552, max=13408, per=26.33%, avg=12980.00, stdev=605.28, samples=2 00:10:27.683 iops : min= 3138, max= 3352, avg=3245.00, stdev=151.32, samples=2 00:10:27.683 lat (msec) : 4=0.19%, 10=0.50%, 20=62.31%, 50=37.01% 00:10:27.683 cpu : usr=3.19%, sys=9.97%, ctx=202, majf=0, minf=9 00:10:27.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:10:27.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:27.683 issued rwts: total=3072,3372,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.683 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:27.683 job1: (groupid=0, jobs=1): err= 0: pid=78333: Tue Dec 17 00:26:13 2024 00:10:27.683 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:10:27.683 slat (usec): min=6, max=5181, avg=152.75, stdev=760.66 00:10:27.683 clat (usec): min=15145, max=22309, avg=20225.30, stdev=936.25 00:10:27.683 lat (usec): min=18949, max=22334, avg=20378.06, stdev=552.10 00:10:27.683 clat percentiles (usec): 00:10:27.683 | 1.00th=[15664], 5.00th=[19530], 10.00th=[19792], 20.00th=[19792], 00:10:27.683 | 30.00th=[19792], 40.00th=[20055], 50.00th=[20317], 60.00th=[20579], 00:10:27.683 | 70.00th=[20579], 80.00th=[20841], 90.00th=[21103], 95.00th=[21365], 00:10:27.683 | 99.00th=[21890], 99.50th=[22152], 99.90th=[22414], 99.95th=[22414], 00:10:27.683 | 99.99th=[22414] 00:10:27.683 write: IOPS=3345, BW=13.1MiB/s (13.7MB/s)(13.1MiB/1003msec); 0 zone resets 00:10:27.683 slat (usec): min=17, max=5633, avg=149.46, stdev=695.30 00:10:27.683 clat (usec): min=2620, max=21635, avg=19084.97, stdev=2154.69 00:10:27.683 lat (usec): min=2647, max=21659, avg=19234.43, stdev=2049.97 00:10:27.683 clat percentiles (usec): 00:10:27.683 | 1.00th=[ 7504], 5.00th=[15926], 10.00th=[18744], 20.00th=[19006], 00:10:27.683 | 30.00th=[19268], 40.00th=[19268], 50.00th=[19530], 60.00th=[19530], 00:10:27.683 | 70.00th=[19792], 80.00th=[19792], 90.00th=[20055], 95.00th=[20317], 00:10:27.683 | 99.00th=[21365], 99.50th=[21627], 99.90th=[21627], 99.95th=[21627], 00:10:27.683 | 99.99th=[21627] 
00:10:27.683 bw ( KiB/s): min=12320, max=13536, per=26.22%, avg=12928.00, stdev=859.84, samples=2 00:10:27.683 iops : min= 3080, max= 3384, avg=3232.00, stdev=214.96, samples=2 00:10:27.684 lat (msec) : 4=0.44%, 10=0.50%, 20=61.64%, 50=37.43% 00:10:27.684 cpu : usr=3.19%, sys=10.28%, ctx=217, majf=0, minf=11 00:10:27.684 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:10:27.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:27.684 issued rwts: total=3072,3356,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.684 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:27.684 job2: (groupid=0, jobs=1): err= 0: pid=78334: Tue Dec 17 00:26:13 2024 00:10:27.684 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:10:27.684 slat (usec): min=6, max=9501, avg=154.84, stdev=666.18 00:10:27.684 clat (usec): min=4669, max=34420, avg=19675.81, stdev=4047.82 00:10:27.684 lat (usec): min=4686, max=36609, avg=19830.64, stdev=4098.16 00:10:27.684 clat percentiles (usec): 00:10:27.684 | 1.00th=[12125], 5.00th=[15008], 10.00th=[15401], 20.00th=[15664], 00:10:27.684 | 30.00th=[16319], 40.00th=[17695], 50.00th=[19792], 60.00th=[22152], 00:10:27.684 | 70.00th=[22676], 80.00th=[22938], 90.00th=[23987], 95.00th=[26608], 00:10:27.684 | 99.00th=[29492], 99.50th=[30540], 99.90th=[31851], 99.95th=[31851], 00:10:27.684 | 99.99th=[34341] 00:10:27.684 write: IOPS=3073, BW=12.0MiB/s (12.6MB/s)(12.1MiB/1004msec); 0 zone resets 00:10:27.684 slat (usec): min=11, max=9465, avg=161.75, stdev=608.77 00:10:27.684 clat (usec): min=651, max=41671, avg=21490.19, stdev=7125.72 00:10:27.684 lat (usec): min=4243, max=41752, avg=21651.94, stdev=7180.98 00:10:27.684 clat percentiles (usec): 00:10:27.684 | 1.00th=[12256], 5.00th=[12387], 10.00th=[13042], 20.00th=[13304], 00:10:27.684 | 30.00th=[14484], 40.00th=[22152], 50.00th=[22938], 60.00th=[23725], 00:10:27.684 | 70.00th=[23987], 80.00th=[25560], 90.00th=[29754], 95.00th=[35390], 00:10:27.684 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:10:27.684 | 99.99th=[41681] 00:10:27.684 bw ( KiB/s): min=12288, max=12312, per=24.95%, avg=12300.00, stdev=16.97, samples=2 00:10:27.684 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:10:27.684 lat (usec) : 750=0.02% 00:10:27.684 lat (msec) : 10=0.68%, 20=43.41%, 50=55.89% 00:10:27.684 cpu : usr=3.19%, sys=9.97%, ctx=371, majf=0, minf=6 00:10:27.684 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:27.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:27.684 issued rwts: total=3072,3086,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.684 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:27.684 job3: (groupid=0, jobs=1): err= 0: pid=78335: Tue Dec 17 00:26:13 2024 00:10:27.684 read: IOPS=2354, BW=9418KiB/s (9644kB/s)(9456KiB/1004msec) 00:10:27.684 slat (usec): min=6, max=7461, avg=169.53, stdev=692.22 00:10:27.684 clat (usec): min=760, max=51995, avg=20439.32, stdev=5305.51 00:10:27.684 lat (usec): min=4547, max=52012, avg=20608.86, stdev=5351.15 00:10:27.684 clat percentiles (usec): 00:10:27.684 | 1.00th=[ 8356], 5.00th=[13960], 10.00th=[15008], 20.00th=[15533], 00:10:27.684 | 30.00th=[16909], 40.00th=[19530], 50.00th=[21890], 60.00th=[22414], 00:10:27.684 | 70.00th=[22676], 
80.00th=[22938], 90.00th=[25560], 95.00th=[27395], 00:10:27.684 | 99.00th=[38011], 99.50th=[46924], 99.90th=[52167], 99.95th=[52167], 00:10:27.684 | 99.99th=[52167] 00:10:27.684 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:10:27.684 slat (usec): min=13, max=9887, avg=225.96, stdev=740.45 00:10:27.684 clat (usec): min=15325, max=63274, avg=30405.09, stdev=11327.40 00:10:27.684 lat (usec): min=15354, max=63298, avg=30631.05, stdev=11401.79 00:10:27.684 clat percentiles (usec): 00:10:27.684 | 1.00th=[18220], 5.00th=[18482], 10.00th=[21890], 20.00th=[22938], 00:10:27.684 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[26608], 00:10:27.684 | 70.00th=[32900], 80.00th=[41157], 90.00th=[49021], 95.00th=[56886], 00:10:27.684 | 99.00th=[61604], 99.50th=[62653], 99.90th=[63177], 99.95th=[63177], 00:10:27.684 | 99.99th=[63177] 00:10:27.684 bw ( KiB/s): min=10156, max=10344, per=20.79%, avg=10250.00, stdev=132.94, samples=2 00:10:27.684 iops : min= 2539, max= 2586, avg=2562.50, stdev=33.23, samples=2 00:10:27.684 lat (usec) : 1000=0.02% 00:10:27.684 lat (msec) : 10=0.85%, 20=23.27%, 50=70.92%, 100=4.94% 00:10:27.684 cpu : usr=2.29%, sys=9.17%, ctx=377, majf=0, minf=11 00:10:27.684 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:10:27.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:27.684 issued rwts: total=2364,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.684 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:27.684 00:10:27.684 Run status group 0 (all jobs): 00:10:27.684 READ: bw=45.1MiB/s (47.2MB/s), 9418KiB/s-12.0MiB/s (9644kB/s-12.5MB/s), io=45.2MiB (47.4MB), run=1003-1004msec 00:10:27.684 WRITE: bw=48.1MiB/s (50.5MB/s), 9.96MiB/s-13.1MiB/s (10.4MB/s-13.8MB/s), io=48.3MiB (50.7MB), run=1003-1004msec 00:10:27.684 00:10:27.684 Disk stats (read/write): 00:10:27.684 nvme0n1: ios=2610/2976, merge=0/0, ticks=12358/12503, in_queue=24861, util=87.98% 00:10:27.684 nvme0n2: ios=2608/2944, merge=0/0, ticks=11897/12739, in_queue=24636, util=89.56% 00:10:27.684 nvme0n3: ios=2560/2759, merge=0/0, ticks=16289/17624, in_queue=33913, util=89.26% 00:10:27.684 nvme0n4: ios=2048/2167, merge=0/0, ticks=13434/21315, in_queue=34749, util=89.63% 00:10:27.684 00:26:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:27.684 [global] 00:10:27.684 thread=1 00:10:27.684 invalidate=1 00:10:27.684 rw=randwrite 00:10:27.684 time_based=1 00:10:27.684 runtime=1 00:10:27.684 ioengine=libaio 00:10:27.684 direct=1 00:10:27.684 bs=4096 00:10:27.684 iodepth=128 00:10:27.684 norandommap=0 00:10:27.684 numjobs=1 00:10:27.684 00:10:27.684 verify_dump=1 00:10:27.684 verify_backlog=512 00:10:27.684 verify_state_save=0 00:10:27.684 do_verify=1 00:10:27.684 verify=crc32c-intel 00:10:27.684 [job0] 00:10:27.684 filename=/dev/nvme0n1 00:10:27.684 [job1] 00:10:27.684 filename=/dev/nvme0n2 00:10:27.684 [job2] 00:10:27.684 filename=/dev/nvme0n3 00:10:27.684 [job3] 00:10:27.684 filename=/dev/nvme0n4 00:10:27.684 Could not set queue depth (nvme0n1) 00:10:27.684 Could not set queue depth (nvme0n2) 00:10:27.684 Could not set queue depth (nvme0n3) 00:10:27.684 Could not set queue depth (nvme0n4) 00:10:27.684 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:27.684 
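All of the fio passes in this log come from scripts/fio-wrapper with the same shape: -p nvmf targets the four /dev/nvme0n1..n4 namespaces, -i 4096 is the block size, -d the queue depth (1 for the first two passes, 128 here), -t the pattern (write or randwrite), -r the runtime in seconds, and -v enables crc32c-intel verification. A hand-written equivalent of the printed job file for this pass, runnable with plain fio; the /tmp path is only illustrative and a few verify bookkeeping options from the log are omitted:

cat > /tmp/nvmf-fio-verify.job <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=128
numjobs=1
do_verify=1
verify=crc32c-intel
verify_backlog=512

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio /tmp/nvmf-fio-verify.job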
job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:27.684 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:27.684 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:27.684 fio-3.35 00:10:27.684 Starting 4 threads 00:10:29.061 00:10:29.061 job0: (groupid=0, jobs=1): err= 0: pid=78390: Tue Dec 17 00:26:14 2024 00:10:29.061 read: IOPS=3519, BW=13.7MiB/s (14.4MB/s)(13.9MiB/1009msec) 00:10:29.061 slat (usec): min=7, max=15346, avg=160.95, stdev=765.09 00:10:29.061 clat (usec): min=1686, max=83507, avg=20460.94, stdev=16248.34 00:10:29.061 lat (usec): min=5492, max=83522, avg=20621.90, stdev=16365.63 00:10:29.061 clat percentiles (usec): 00:10:29.061 | 1.00th=[ 9634], 5.00th=[10814], 10.00th=[11600], 20.00th=[11994], 00:10:29.061 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12518], 60.00th=[12780], 00:10:29.061 | 70.00th=[13829], 80.00th=[30016], 90.00th=[50594], 95.00th=[60556], 00:10:29.061 | 99.00th=[71828], 99.50th=[76022], 99.90th=[83362], 99.95th=[83362], 00:10:29.061 | 99.99th=[83362] 00:10:29.061 write: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec); 0 zone resets 00:10:29.061 slat (usec): min=10, max=9669, avg=112.14, stdev=601.65 00:10:29.061 clat (usec): min=6332, max=64008, avg=15144.48, stdev=9584.56 00:10:29.061 lat (usec): min=6373, max=64030, avg=15256.62, stdev=9649.62 00:10:29.061 clat percentiles (usec): 00:10:29.061 | 1.00th=[ 9110], 5.00th=[10290], 10.00th=[10552], 20.00th=[11076], 00:10:29.061 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11600], 60.00th=[11863], 00:10:29.061 | 70.00th=[12387], 80.00th=[13566], 90.00th=[27919], 95.00th=[36963], 00:10:29.061 | 99.00th=[57410], 99.50th=[61080], 99.90th=[64226], 99.95th=[64226], 00:10:29.061 | 99.99th=[64226] 00:10:29.061 bw ( KiB/s): min= 7808, max=20864, per=24.54%, avg=14336.00, stdev=9231.99, samples=2 00:10:29.061 iops : min= 1952, max= 5216, avg=3584.00, stdev=2308.00, samples=2 00:10:29.061 lat (msec) : 2=0.01%, 10=3.03%, 20=77.95%, 50=12.59%, 100=6.42% 00:10:29.061 cpu : usr=3.27%, sys=9.52%, ctx=400, majf=0, minf=13 00:10:29.061 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:10:29.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.061 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.061 issued rwts: total=3551,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.061 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.061 job1: (groupid=0, jobs=1): err= 0: pid=78391: Tue Dec 17 00:26:14 2024 00:10:29.061 read: IOPS=5803, BW=22.7MiB/s (23.8MB/s)(22.8MiB/1004msec) 00:10:29.061 slat (usec): min=7, max=6236, avg=79.78, stdev=491.29 00:10:29.061 clat (usec): min=1135, max=19293, avg=11183.72, stdev=1627.95 00:10:29.061 lat (usec): min=4576, max=22882, avg=11263.50, stdev=1646.84 00:10:29.061 clat percentiles (usec): 00:10:29.061 | 1.00th=[ 5407], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[10159], 00:10:29.061 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11207], 60.00th=[11600], 00:10:29.061 | 70.00th=[12125], 80.00th=[12256], 90.00th=[12780], 95.00th=[13173], 00:10:29.061 | 99.00th=[16319], 99.50th=[17957], 99.90th=[19268], 99.95th=[19268], 00:10:29.061 | 99.99th=[19268] 00:10:29.061 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets 00:10:29.061 slat (usec): min=6, max=7789, avg=80.31, 
stdev=459.66 00:10:29.061 clat (usec): min=5063, max=14911, avg=10116.10, stdev=1301.91 00:10:29.061 lat (usec): min=6747, max=14951, avg=10196.42, stdev=1240.45 00:10:29.061 clat percentiles (usec): 00:10:29.061 | 1.00th=[ 6718], 5.00th=[ 8291], 10.00th=[ 8717], 20.00th=[ 9241], 00:10:29.061 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10290], 00:10:29.061 | 70.00th=[10683], 80.00th=[11076], 90.00th=[11863], 95.00th=[12256], 00:10:29.061 | 99.00th=[14615], 99.50th=[14746], 99.90th=[14877], 99.95th=[14877], 00:10:29.061 | 99.99th=[14877] 00:10:29.061 bw ( KiB/s): min=22888, max=26264, per=42.07%, avg=24576.00, stdev=2387.19, samples=2 00:10:29.061 iops : min= 5722, max= 6566, avg=6144.00, stdev=596.80, samples=2 00:10:29.061 lat (msec) : 2=0.01%, 10=35.61%, 20=64.38% 00:10:29.061 cpu : usr=5.28%, sys=15.05%, ctx=254, majf=0, minf=10 00:10:29.061 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:29.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.061 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.061 issued rwts: total=5827,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.061 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.061 job2: (groupid=0, jobs=1): err= 0: pid=78392: Tue Dec 17 00:26:14 2024 00:10:29.061 read: IOPS=2086, BW=8347KiB/s (8548kB/s)(8364KiB/1002msec) 00:10:29.061 slat (usec): min=6, max=10901, avg=204.81, stdev=890.64 00:10:29.061 clat (usec): min=1057, max=50137, avg=24297.29, stdev=5814.67 00:10:29.061 lat (usec): min=4768, max=50157, avg=24502.10, stdev=5890.47 00:10:29.061 clat percentiles (usec): 00:10:29.061 | 1.00th=[10945], 5.00th=[18220], 10.00th=[19006], 20.00th=[20579], 00:10:29.061 | 30.00th=[21365], 40.00th=[22414], 50.00th=[23200], 60.00th=[23462], 00:10:29.061 | 70.00th=[25035], 80.00th=[28181], 90.00th=[32637], 95.00th=[35914], 00:10:29.061 | 99.00th=[43254], 99.50th=[46400], 99.90th=[46400], 99.95th=[46400], 00:10:29.061 | 99.99th=[50070] 00:10:29.061 write: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec); 0 zone resets 00:10:29.061 slat (usec): min=11, max=6802, avg=215.28, stdev=850.49 00:10:29.061 clat (usec): min=11322, max=65280, avg=29562.20, stdev=14406.80 00:10:29.061 lat (usec): min=11350, max=65304, avg=29777.49, stdev=14509.45 00:10:29.061 clat percentiles (usec): 00:10:29.062 | 1.00th=[13435], 5.00th=[14746], 10.00th=[15664], 20.00th=[16188], 00:10:29.062 | 30.00th=[16581], 40.00th=[20055], 50.00th=[22676], 60.00th=[31065], 00:10:29.062 | 70.00th=[41157], 80.00th=[45351], 90.00th=[50070], 95.00th=[55313], 00:10:29.062 | 99.00th=[61604], 99.50th=[64750], 99.90th=[65274], 99.95th=[65274], 00:10:29.062 | 99.99th=[65274] 00:10:29.062 bw ( KiB/s): min= 8192, max=11608, per=16.95%, avg=9900.00, stdev=2415.48, samples=2 00:10:29.062 iops : min= 2048, max= 2902, avg=2475.00, stdev=603.87, samples=2 00:10:29.062 lat (msec) : 2=0.02%, 10=0.13%, 20=27.89%, 50=66.29%, 100=5.68% 00:10:29.062 cpu : usr=2.00%, sys=8.19%, ctx=264, majf=0, minf=17 00:10:29.062 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:10:29.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.062 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.062 issued rwts: total=2091,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.062 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.062 job3: (groupid=0, jobs=1): err= 0: pid=78393: Tue Dec 17 
00:26:14 2024 00:10:29.062 read: IOPS=2043, BW=8176KiB/s (8372kB/s)(8192KiB/1002msec) 00:10:29.062 slat (usec): min=8, max=13758, avg=233.85, stdev=1078.23 00:10:29.062 clat (usec): min=14712, max=65332, avg=29767.24, stdev=12322.77 00:10:29.062 lat (usec): min=17889, max=69023, avg=30001.09, stdev=12391.33 00:10:29.062 clat percentiles (usec): 00:10:29.062 | 1.00th=[16319], 5.00th=[18482], 10.00th=[20055], 20.00th=[20317], 00:10:29.062 | 30.00th=[20579], 40.00th=[21103], 50.00th=[25035], 60.00th=[30540], 00:10:29.062 | 70.00th=[31065], 80.00th=[40633], 90.00th=[48497], 95.00th=[57410], 00:10:29.062 | 99.00th=[65274], 99.50th=[65274], 99.90th=[65274], 99.95th=[65274], 00:10:29.062 | 99.99th=[65274] 00:10:29.062 write: IOPS=2442, BW=9768KiB/s (10.0MB/s)(9788KiB/1002msec); 0 zone resets 00:10:29.062 slat (usec): min=14, max=10603, avg=204.96, stdev=873.28 00:10:29.062 clat (usec): min=885, max=71647, avg=26316.39, stdev=11102.04 00:10:29.062 lat (usec): min=4297, max=71673, avg=26521.34, stdev=11138.42 00:10:29.062 clat percentiles (usec): 00:10:29.062 | 1.00th=[12911], 5.00th=[15926], 10.00th=[16057], 20.00th=[17957], 00:10:29.062 | 30.00th=[20317], 40.00th=[21627], 50.00th=[23987], 60.00th=[25297], 00:10:29.062 | 70.00th=[28967], 80.00th=[30278], 90.00th=[43254], 95.00th=[51643], 00:10:29.062 | 99.00th=[63701], 99.50th=[69731], 99.90th=[71828], 99.95th=[71828], 00:10:29.062 | 99.99th=[71828] 00:10:29.062 bw ( KiB/s): min= 6264, max=12312, per=15.90%, avg=9288.00, stdev=4276.58, samples=2 00:10:29.062 iops : min= 1566, max= 3078, avg=2322.00, stdev=1069.15, samples=2 00:10:29.062 lat (usec) : 1000=0.02% 00:10:29.062 lat (msec) : 10=0.18%, 20=19.11%, 50=73.35%, 100=7.34% 00:10:29.062 cpu : usr=2.10%, sys=7.59%, ctx=327, majf=0, minf=13 00:10:29.062 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:10:29.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.062 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.062 issued rwts: total=2048,2447,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.062 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.062 00:10:29.062 Run status group 0 (all jobs): 00:10:29.062 READ: bw=52.3MiB/s (54.9MB/s), 8176KiB/s-22.7MiB/s (8372kB/s-23.8MB/s), io=52.8MiB (55.4MB), run=1002-1009msec 00:10:29.062 WRITE: bw=57.0MiB/s (59.8MB/s), 9768KiB/s-23.9MiB/s (10.0MB/s-25.1MB/s), io=57.6MiB (60.4MB), run=1002-1009msec 00:10:29.062 00:10:29.062 Disk stats (read/write): 00:10:29.062 nvme0n1: ios=3291/3584, merge=0/0, ticks=21040/18355, in_queue=39395, util=86.67% 00:10:29.062 nvme0n2: ios=4908/5120, merge=0/0, ticks=51806/48168, in_queue=99974, util=88.74% 00:10:29.062 nvme0n3: ios=1576/2048, merge=0/0, ticks=13556/20628, in_queue=34184, util=88.89% 00:10:29.062 nvme0n4: ios=2048/2078, merge=0/0, ticks=14729/10637, in_queue=25366, util=89.44% 00:10:29.062 00:26:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:29.062 00:26:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=78406 00:10:29.062 00:26:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:29.062 00:26:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:29.062 [global] 00:10:29.062 thread=1 00:10:29.062 invalidate=1 00:10:29.062 rw=read 00:10:29.062 time_based=1 00:10:29.062 runtime=10 00:10:29.062 ioengine=libaio 
00:10:29.062 direct=1 00:10:29.062 bs=4096 00:10:29.062 iodepth=1 00:10:29.062 norandommap=1 00:10:29.062 numjobs=1 00:10:29.062 00:10:29.062 [job0] 00:10:29.062 filename=/dev/nvme0n1 00:10:29.062 [job1] 00:10:29.062 filename=/dev/nvme0n2 00:10:29.062 [job2] 00:10:29.062 filename=/dev/nvme0n3 00:10:29.062 [job3] 00:10:29.062 filename=/dev/nvme0n4 00:10:29.062 Could not set queue depth (nvme0n1) 00:10:29.062 Could not set queue depth (nvme0n2) 00:10:29.062 Could not set queue depth (nvme0n3) 00:10:29.062 Could not set queue depth (nvme0n4) 00:10:29.062 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.062 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.062 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.062 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.062 fio-3.35 00:10:29.062 Starting 4 threads 00:10:32.343 00:26:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:32.343 fio: pid=78449, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:32.343 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=38907904, buflen=4096 00:10:32.343 00:26:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:32.343 fio: pid=78448, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:32.343 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=44085248, buflen=4096 00:10:32.343 00:26:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:32.343 00:26:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:32.601 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=9641984, buflen=4096 00:10:32.601 fio: pid=78446, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:32.602 00:26:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:32.602 00:26:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:32.861 fio: pid=78447, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:32.861 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=15069184, buflen=4096 00:10:33.120 00:10:33.120 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=78446: Tue Dec 17 00:26:18 2024 00:10:33.120 read: IOPS=5350, BW=20.9MiB/s (21.9MB/s)(73.2MiB/3502msec) 00:10:33.120 slat (usec): min=7, max=15430, avg=15.78, stdev=185.37 00:10:33.120 clat (usec): min=129, max=3402, avg=169.87, stdev=50.60 00:10:33.120 lat (usec): min=140, max=15949, avg=185.65, stdev=194.40 00:10:33.120 clat percentiles (usec): 00:10:33.120 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:10:33.120 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:10:33.120 | 70.00th=[ 169], 80.00th=[ 178], 
90.00th=[ 208], 95.00th=[ 233], 00:10:33.120 | 99.00th=[ 260], 99.50th=[ 269], 99.90th=[ 627], 99.95th=[ 816], 00:10:33.120 | 99.99th=[ 3032] 00:10:33.120 bw ( KiB/s): min=20968, max=23080, per=35.95%, avg=22441.33, stdev=781.91, samples=6 00:10:33.120 iops : min= 5242, max= 5770, avg=5610.33, stdev=195.48, samples=6 00:10:33.120 lat (usec) : 250=98.29%, 500=1.58%, 750=0.06%, 1000=0.02% 00:10:33.120 lat (msec) : 2=0.04%, 4=0.01% 00:10:33.120 cpu : usr=1.31%, sys=6.54%, ctx=18749, majf=0, minf=1 00:10:33.120 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:33.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.120 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.120 issued rwts: total=18739,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.120 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:33.120 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=78447: Tue Dec 17 00:26:18 2024 00:10:33.120 read: IOPS=5300, BW=20.7MiB/s (21.7MB/s)(78.4MiB/3785msec) 00:10:33.120 slat (usec): min=7, max=9751, avg=15.47, stdev=140.22 00:10:33.120 clat (usec): min=104, max=3354, avg=172.04, stdev=56.67 00:10:33.120 lat (usec): min=138, max=9939, avg=187.52, stdev=152.52 00:10:33.120 clat percentiles (usec): 00:10:33.120 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 155], 00:10:33.120 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 167], 00:10:33.120 | 70.00th=[ 172], 80.00th=[ 180], 90.00th=[ 206], 95.00th=[ 231], 00:10:33.120 | 99.00th=[ 265], 99.50th=[ 285], 99.90th=[ 775], 99.95th=[ 1287], 00:10:33.120 | 99.99th=[ 2147] 00:10:33.120 bw ( KiB/s): min=16332, max=22728, per=34.09%, avg=21279.43, stdev=2269.84, samples=7 00:10:33.120 iops : min= 4083, max= 5682, avg=5319.86, stdev=567.46, samples=7 00:10:33.120 lat (usec) : 250=98.00%, 500=1.85%, 750=0.03%, 1000=0.04% 00:10:33.120 lat (msec) : 2=0.05%, 4=0.02% 00:10:33.120 cpu : usr=1.19%, sys=6.13%, ctx=20085, majf=0, minf=2 00:10:33.120 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:33.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.120 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.120 issued rwts: total=20064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.120 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:33.120 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=78448: Tue Dec 17 00:26:18 2024 00:10:33.120 read: IOPS=3345, BW=13.1MiB/s (13.7MB/s)(42.0MiB/3217msec) 00:10:33.120 slat (usec): min=11, max=7833, avg=16.43, stdev=104.83 00:10:33.120 clat (usec): min=144, max=7035, avg=281.06, stdev=143.03 00:10:33.120 lat (usec): min=157, max=8026, avg=297.49, stdev=176.90 00:10:33.120 clat percentiles (usec): 00:10:33.120 | 1.00th=[ 155], 5.00th=[ 165], 10.00th=[ 176], 20.00th=[ 269], 00:10:33.120 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 293], 00:10:33.120 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 318], 95.00th=[ 326], 00:10:33.120 | 99.00th=[ 347], 99.50th=[ 416], 99.90th=[ 2180], 99.95th=[ 3982], 00:10:33.120 | 99.99th=[ 5866] 00:10:33.120 bw ( KiB/s): min=12360, max=13568, per=20.55%, avg=12829.33, stdev=431.01, samples=6 00:10:33.120 iops : min= 3090, max= 3392, avg=3207.33, stdev=107.75, samples=6 00:10:33.120 lat (usec) : 250=14.48%, 500=85.20%, 750=0.11%, 1000=0.05% 
00:10:33.120 lat (msec) : 2=0.05%, 4=0.06%, 10=0.05% 00:10:33.120 cpu : usr=1.34%, sys=3.95%, ctx=10770, majf=0, minf=1 00:10:33.120 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:33.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.120 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.120 issued rwts: total=10764,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.120 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:33.120 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=78449: Tue Dec 17 00:26:18 2024 00:10:33.120 read: IOPS=3252, BW=12.7MiB/s (13.3MB/s)(37.1MiB/2921msec) 00:10:33.120 slat (usec): min=12, max=318, avg=17.46, stdev= 5.95 00:10:33.120 clat (usec): min=146, max=2518, avg=288.27, stdev=46.44 00:10:33.120 lat (usec): min=160, max=2533, avg=305.73, stdev=46.51 00:10:33.120 clat percentiles (usec): 00:10:33.120 | 1.00th=[ 172], 5.00th=[ 255], 10.00th=[ 265], 20.00th=[ 273], 00:10:33.120 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 293], 00:10:33.120 | 70.00th=[ 302], 80.00th=[ 310], 90.00th=[ 318], 95.00th=[ 326], 00:10:33.120 | 99.00th=[ 347], 99.50th=[ 355], 99.90th=[ 545], 99.95th=[ 791], 00:10:33.120 | 99.99th=[ 2507] 00:10:33.120 bw ( KiB/s): min=12944, max=13240, per=20.90%, avg=13048.00, stdev=127.37, samples=5 00:10:33.120 iops : min= 3236, max= 3310, avg=3262.00, stdev=31.84, samples=5 00:10:33.120 lat (usec) : 250=3.76%, 500=96.09%, 750=0.08%, 1000=0.02% 00:10:33.120 lat (msec) : 2=0.01%, 4=0.02% 00:10:33.120 cpu : usr=1.13%, sys=4.79%, ctx=9500, majf=0, minf=2 00:10:33.120 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:33.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.120 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.120 issued rwts: total=9500,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.120 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:33.120 00:10:33.120 Run status group 0 (all jobs): 00:10:33.120 READ: bw=61.0MiB/s (63.9MB/s), 12.7MiB/s-20.9MiB/s (13.3MB/s-21.9MB/s), io=231MiB (242MB), run=2921-3785msec 00:10:33.120 00:10:33.120 Disk stats (read/write): 00:10:33.120 nvme0n1: ios=18205/0, merge=0/0, ticks=3079/0, in_queue=3079, util=94.91% 00:10:33.120 nvme0n2: ios=19081/0, merge=0/0, ticks=3340/0, in_queue=3340, util=95.56% 00:10:33.120 nvme0n3: ios=10235/0, merge=0/0, ticks=2949/0, in_queue=2949, util=96.06% 00:10:33.120 nvme0n4: ios=9332/0, merge=0/0, ticks=2755/0, in_queue=2755, util=96.66% 00:10:33.120 00:26:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:33.120 00:26:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:33.379 00:26:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:33.379 00:26:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:33.637 00:26:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:33.637 00:26:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:33.895 00:26:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:33.895 00:26:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:34.152 00:26:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:34.153 00:26:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:34.411 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:34.411 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 78406 00:10:34.411 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:34.411 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:34.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.411 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:34.411 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:34.411 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:34.411 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:34.411 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:34.411 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:34.411 nvmf hotplug test: fio failed as expected 00:10:34.411 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:34.411 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:34.411 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:34.411 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:34.713 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:34.713 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:34.713 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:34.713 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:34.713 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:34.713 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:34.713 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:34.713 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:34.713 00:26:20 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:34.713 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:34.713 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:34.713 rmmod nvme_tcp 00:10:34.713 rmmod nvme_fabrics 00:10:34.713 rmmod nvme_keyring 00:10:34.713 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:34.713 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:34.713 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:34.713 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 78030 ']' 00:10:34.713 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 78030 00:10:34.713 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 78030 ']' 00:10:34.713 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 78030 00:10:34.713 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:34.713 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:34.713 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78030 00:10:34.713 killing process with pid 78030 00:10:34.713 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:34.713 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:34.713 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78030' 00:10:34.713 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 78030 00:10:34.713 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 78030 00:10:34.972 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:34.972 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:34.972 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:34.972 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:34.972 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:10:34.972 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:34.972 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:10:34.972 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:34.972 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:34.972 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:34.972 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:34.972 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br 
nomaster 00:10:34.972 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:34.972 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:34.972 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:34.972 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:34.972 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:34.972 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:35.231 00:26:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:35.231 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:35.231 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:35.231 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:35.231 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:35.231 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.231 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.231 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.231 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:10:35.231 ************************************ 00:10:35.231 END TEST nvmf_fio_target 00:10:35.231 ************************************ 00:10:35.231 00:10:35.231 real 0m19.353s 00:10:35.231 user 1m12.243s 00:10:35.231 sys 0m10.338s 00:10:35.231 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:35.231 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.231 00:26:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:35.231 00:26:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:35.231 00:26:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:35.231 00:26:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:35.231 ************************************ 00:10:35.231 START TEST nvmf_bdevio 00:10:35.231 ************************************ 00:10:35.231 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:35.492 * Looking for test storage... 
00:10:35.492 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:35.492 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:35.492 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:10:35.492 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:35.492 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:35.492 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:35.492 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:35.492 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:35.492 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:35.492 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:35.492 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:35.492 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:35.492 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:35.492 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:35.492 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:35.492 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:35.492 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:35.492 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:35.492 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:35.492 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:35.492 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:35.492 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:35.492 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:35.492 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:35.492 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:35.492 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:35.492 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:35.492 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:35.492 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:35.492 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:35.492 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:35.492 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:35.492 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:35.492 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:35.492 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:35.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.492 --rc genhtml_branch_coverage=1 00:10:35.492 --rc genhtml_function_coverage=1 00:10:35.492 --rc genhtml_legend=1 00:10:35.492 --rc geninfo_all_blocks=1 00:10:35.492 --rc geninfo_unexecuted_blocks=1 00:10:35.492 00:10:35.492 ' 00:10:35.492 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:35.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.492 --rc genhtml_branch_coverage=1 00:10:35.492 --rc genhtml_function_coverage=1 00:10:35.492 --rc genhtml_legend=1 00:10:35.492 --rc geninfo_all_blocks=1 00:10:35.492 --rc geninfo_unexecuted_blocks=1 00:10:35.492 00:10:35.492 ' 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:35.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.493 --rc genhtml_branch_coverage=1 00:10:35.493 --rc genhtml_function_coverage=1 00:10:35.493 --rc genhtml_legend=1 00:10:35.493 --rc geninfo_all_blocks=1 00:10:35.493 --rc geninfo_unexecuted_blocks=1 00:10:35.493 00:10:35.493 ' 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:35.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.493 --rc genhtml_branch_coverage=1 00:10:35.493 --rc genhtml_function_coverage=1 00:10:35.493 --rc genhtml_legend=1 00:10:35.493 --rc geninfo_all_blocks=1 00:10:35.493 --rc geninfo_unexecuted_blocks=1 00:10:35.493 00:10:35.493 ' 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:35.493 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:35.493 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:35.494 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:35.494 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:35.494 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:35.494 Cannot find device "nvmf_init_br" 00:10:35.494 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:10:35.494 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:35.494 Cannot find device "nvmf_init_br2" 00:10:35.494 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:10:35.494 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:35.494 Cannot find device "nvmf_tgt_br" 00:10:35.494 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:10:35.494 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:35.494 Cannot find device "nvmf_tgt_br2" 00:10:35.494 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:10:35.494 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:35.494 Cannot find device "nvmf_init_br" 00:10:35.494 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:10:35.494 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:35.494 Cannot find device "nvmf_init_br2" 00:10:35.494 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:10:35.494 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:35.494 Cannot find device "nvmf_tgt_br" 00:10:35.494 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:10:35.494 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:35.494 Cannot find device "nvmf_tgt_br2" 00:10:35.494 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:10:35.494 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:35.494 Cannot find device "nvmf_br" 00:10:35.494 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:10:35.494 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:35.494 Cannot find device "nvmf_init_if" 00:10:35.494 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:10:35.494 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:35.752 Cannot find device "nvmf_init_if2" 00:10:35.753 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:10:35.753 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:35.753 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:35.753 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:10:35.753 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:35.753 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:35.753 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:10:35.753 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:35.753 
00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:35.753 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:35.753 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:35.753 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:35.753 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:35.753 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:35.753 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:35.753 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:35.753 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:35.753 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:35.753 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:35.753 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:35.753 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:35.753 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:35.753 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:35.753 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:35.753 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:35.753 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:35.753 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:35.753 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:35.753 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:35.753 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:35.753 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:35.753 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:35.753 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:35.753 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:35.753 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:35.753 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:35.753 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:35.753 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:35.753 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:35.753 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:35.753 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:35.753 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:10:35.753 00:10:35.753 --- 10.0.0.3 ping statistics --- 00:10:35.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.753 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:10:35.753 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:36.012 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:36.012 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.086 ms 00:10:36.012 00:10:36.012 --- 10.0.0.4 ping statistics --- 00:10:36.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.012 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:10:36.012 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:36.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:36.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:10:36.012 00:10:36.012 --- 10.0.0.1 ping statistics --- 00:10:36.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.012 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:10:36.012 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:36.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:36.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.035 ms 00:10:36.012 00:10:36.012 --- 10.0.0.2 ping statistics --- 00:10:36.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.012 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:10:36.012 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:36.012 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@457 -- # return 0 00:10:36.012 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:36.012 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:36.012 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:36.012 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:36.012 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:36.012 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:36.012 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:36.012 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:36.012 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:36.012 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:36.012 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:36.012 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # nvmfpid=78774 00:10:36.012 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:36.012 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 78774 00:10:36.012 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 78774 ']' 00:10:36.012 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.012 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:36.012 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.012 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:36.012 00:26:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:36.012 [2024-12-17 00:26:21.843995] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:10:36.012 [2024-12-17 00:26:21.844084] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:36.012 [2024-12-17 00:26:21.980588] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:36.270 [2024-12-17 00:26:22.025696] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:36.270 [2024-12-17 00:26:22.025765] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:36.270 [2024-12-17 00:26:22.025784] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:36.270 [2024-12-17 00:26:22.025794] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:36.270 [2024-12-17 00:26:22.025802] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:36.270 [2024-12-17 00:26:22.025981] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:10:36.270 [2024-12-17 00:26:22.026148] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:10:36.270 [2024-12-17 00:26:22.026705] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:10:36.270 [2024-12-17 00:26:22.026714] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:36.270 [2024-12-17 00:26:22.060675] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:36.836 00:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:36.836 00:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:36.836 00:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:36.836 00:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:36.836 00:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:36.836 00:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:36.836 00:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:36.836 00:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.836 00:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:36.836 [2024-12-17 00:26:22.823452] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:36.836 00:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.836 00:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:36.836 00:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.836 00:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.094 Malloc0 00:10:37.094 00:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.094 00:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:10:37.094 00:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.094 00:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.094 00:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.094 00:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:37.094 00:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.094 00:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.094 00:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.094 00:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:37.094 00:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.094 00:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.094 [2024-12-17 00:26:22.870863] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:37.094 00:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.094 00:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:37.094 00:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:37.094 00:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:10:37.094 00:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:10:37.094 00:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:37.094 00:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:37.094 { 00:10:37.094 "params": { 00:10:37.094 "name": "Nvme$subsystem", 00:10:37.094 "trtype": "$TEST_TRANSPORT", 00:10:37.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:37.094 "adrfam": "ipv4", 00:10:37.094 "trsvcid": "$NVMF_PORT", 00:10:37.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:37.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:37.094 "hdgst": ${hdgst:-false}, 00:10:37.094 "ddgst": ${ddgst:-false} 00:10:37.094 }, 00:10:37.094 "method": "bdev_nvme_attach_controller" 00:10:37.094 } 00:10:37.094 EOF 00:10:37.094 )") 00:10:37.094 00:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:10:37.094 00:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 
00:10:37.094 00:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:10:37.094 00:26:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:37.094 "params": { 00:10:37.094 "name": "Nvme1", 00:10:37.094 "trtype": "tcp", 00:10:37.094 "traddr": "10.0.0.3", 00:10:37.094 "adrfam": "ipv4", 00:10:37.094 "trsvcid": "4420", 00:10:37.094 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:37.094 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:37.094 "hdgst": false, 00:10:37.094 "ddgst": false 00:10:37.094 }, 00:10:37.094 "method": "bdev_nvme_attach_controller" 00:10:37.094 }' 00:10:37.094 [2024-12-17 00:26:22.932677] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:10:37.094 [2024-12-17 00:26:22.932765] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78810 ] 00:10:37.094 [2024-12-17 00:26:23.072569] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:37.353 [2024-12-17 00:26:23.115506] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.353 [2024-12-17 00:26:23.115637] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:37.353 [2024-12-17 00:26:23.115852] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.353 [2024-12-17 00:26:23.156338] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:37.353 I/O targets: 00:10:37.353 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:37.353 00:10:37.353 00:10:37.353 CUnit - A unit testing framework for C - Version 2.1-3 00:10:37.353 http://cunit.sourceforge.net/ 00:10:37.353 00:10:37.353 00:10:37.353 Suite: bdevio tests on: Nvme1n1 00:10:37.353 Test: blockdev write read block ...passed 00:10:37.353 Test: blockdev write zeroes read block ...passed 00:10:37.353 Test: blockdev write zeroes read no split ...passed 00:10:37.353 Test: blockdev write zeroes read split ...passed 00:10:37.353 Test: blockdev write zeroes read split partial ...passed 00:10:37.353 Test: blockdev reset ...[2024-12-17 00:26:23.290275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:37.353 [2024-12-17 00:26:23.290556] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7050d0 (9): Bad file descriptor 00:10:37.353 [2024-12-17 00:26:23.307315] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:37.353 passed 00:10:37.353 Test: blockdev write read 8 blocks ...passed 00:10:37.353 Test: blockdev write read size > 128k ...passed 00:10:37.353 Test: blockdev write read invalid size ...passed 00:10:37.353 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:37.353 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:37.353 Test: blockdev write read max offset ...passed 00:10:37.353 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:37.353 Test: blockdev writev readv 8 blocks ...passed 00:10:37.353 Test: blockdev writev readv 30 x 1block ...passed 00:10:37.353 Test: blockdev writev readv block ...passed 00:10:37.353 Test: blockdev writev readv size > 128k ...passed 00:10:37.353 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:37.353 Test: blockdev comparev and writev ...[2024-12-17 00:26:23.315197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:37.353 [2024-12-17 00:26:23.315257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:37.353 [2024-12-17 00:26:23.315293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:37.353 [2024-12-17 00:26:23.315321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:37.353 [2024-12-17 00:26:23.315643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:37.353 [2024-12-17 00:26:23.315664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:37.353 [2024-12-17 00:26:23.315684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:37.353 [2024-12-17 00:26:23.315697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:37.353 [2024-12-17 00:26:23.315982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:37.353 [2024-12-17 00:26:23.316001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:37.353 [2024-12-17 00:26:23.316021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:37.353 [2024-12-17 00:26:23.316033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:37.353 [2024-12-17 00:26:23.316361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:37.353 [2024-12-17 00:26:23.316383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:37.353 [2024-12-17 00:26:23.316404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:37.354 [2024-12-17 00:26:23.316417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:37.354 passed 00:10:37.354 Test: blockdev nvme passthru rw ...passed 00:10:37.354 Test: blockdev nvme passthru vendor specific ...[2024-12-17 00:26:23.317482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:37.354 [2024-12-17 00:26:23.317516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:37.354 [2024-12-17 00:26:23.317637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:37.354 [2024-12-17 00:26:23.317656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:37.354 [2024-12-17 00:26:23.317772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:37.354 [2024-12-17 00:26:23.317797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:37.354 [2024-12-17 00:26:23.317915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:37.354 passed 00:10:37.354 Test: blockdev nvme admin passthru ...[2024-12-17 00:26:23.317934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:37.354 passed 00:10:37.354 Test: blockdev copy ...passed 00:10:37.354 00:10:37.354 Run Summary: Type Total Ran Passed Failed Inactive 00:10:37.354 suites 1 1 n/a 0 0 00:10:37.354 tests 23 23 23 0 0 00:10:37.354 asserts 152 152 152 0 n/a 00:10:37.354 00:10:37.354 Elapsed time = 0.144 seconds 00:10:37.612 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:37.612 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.612 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:37.612 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.612 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:37.612 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:37.612 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:37.612 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:37.612 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:37.612 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:37.612 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:37.612 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:37.612 rmmod nvme_tcp 00:10:37.612 rmmod nvme_fabrics 00:10:37.612 rmmod nvme_keyring 00:10:37.612 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:37.612 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:37.612 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
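The teardown that begins here unloads the host-side kernel modules before the target process is killed; the bare "rmmod nvme_tcp / nvme_fabrics / nvme_keyring" lines are modprobe's verbose output. A rough equivalent of what nvmfcleanup does, assuming nothing is still mounted on top of the nvme-tcp devices (the retry loop guards against the modules still being referenced right after disconnect):

# Retry a few times: the modules may briefly hold references
# after the host disconnects from the subsystem.
for i in {1..20}; do
    if modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics; then
        break
    fi
    sleep 1
done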
00:10:37.612 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 78774 ']' 00:10:37.612 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 78774 00:10:37.612 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 78774 ']' 00:10:37.612 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 78774 00:10:37.612 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:37.612 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:37.612 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78774 00:10:37.871 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:10:37.871 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:37.871 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78774' 00:10:37.871 killing process with pid 78774 00:10:37.871 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 78774 00:10:37.871 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 78774 00:10:37.871 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:37.871 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:37.871 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:37.871 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:37.871 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:10:37.871 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:37.871 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:10:37.871 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:37.871 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:37.871 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:37.871 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:37.871 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:37.871 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:37.871 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:37.871 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:37.871 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:37.871 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:37.871 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:38.129 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:10:38.129 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:38.129 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:38.129 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:38.129 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:38.129 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.129 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:38.129 00:26:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.129 00:26:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:10:38.129 00:10:38.129 real 0m2.848s 00:10:38.129 user 0m8.283s 00:10:38.129 sys 0m0.771s 00:10:38.129 00:26:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:38.129 ************************************ 00:10:38.129 END TEST nvmf_bdevio 00:10:38.129 ************************************ 00:10:38.129 00:26:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:38.129 00:26:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:38.129 00:10:38.129 real 2m28.665s 00:10:38.129 user 6m28.864s 00:10:38.129 sys 0m52.377s 00:10:38.129 00:26:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:38.129 ************************************ 00:10:38.129 00:26:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:38.129 END TEST nvmf_target_core 00:10:38.129 ************************************ 00:10:38.129 00:26:24 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:38.129 00:26:24 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:38.129 00:26:24 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:38.129 00:26:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:38.129 ************************************ 00:10:38.129 START TEST nvmf_target_extra 00:10:38.129 ************************************ 00:10:38.129 00:26:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:38.389 * Looking for test storage... 
00:10:38.389 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:38.389 00:26:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:38.389 00:26:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:10:38.389 00:26:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:38.389 00:26:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:38.389 00:26:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:38.389 00:26:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:38.389 00:26:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:38.389 00:26:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:38.389 00:26:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:38.389 00:26:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:38.389 00:26:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:38.389 00:26:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:38.389 00:26:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:38.389 00:26:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:38.389 00:26:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:38.389 00:26:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:38.389 00:26:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:38.389 00:26:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:38.389 00:26:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:38.389 00:26:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:38.389 00:26:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:38.389 00:26:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:38.389 00:26:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:38.389 00:26:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:38.389 00:26:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:38.389 00:26:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:38.389 00:26:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:38.389 00:26:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:38.389 00:26:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:38.389 00:26:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:38.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.390 --rc genhtml_branch_coverage=1 00:10:38.390 --rc genhtml_function_coverage=1 00:10:38.390 --rc genhtml_legend=1 00:10:38.390 --rc geninfo_all_blocks=1 00:10:38.390 --rc geninfo_unexecuted_blocks=1 00:10:38.390 00:10:38.390 ' 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:38.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.390 --rc genhtml_branch_coverage=1 00:10:38.390 --rc genhtml_function_coverage=1 00:10:38.390 --rc genhtml_legend=1 00:10:38.390 --rc geninfo_all_blocks=1 00:10:38.390 --rc geninfo_unexecuted_blocks=1 00:10:38.390 00:10:38.390 ' 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:38.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.390 --rc genhtml_branch_coverage=1 00:10:38.390 --rc genhtml_function_coverage=1 00:10:38.390 --rc genhtml_legend=1 00:10:38.390 --rc geninfo_all_blocks=1 00:10:38.390 --rc geninfo_unexecuted_blocks=1 00:10:38.390 00:10:38.390 ' 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:38.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.390 --rc genhtml_branch_coverage=1 00:10:38.390 --rc genhtml_function_coverage=1 00:10:38.390 --rc genhtml_legend=1 00:10:38.390 --rc geninfo_all_blocks=1 00:10:38.390 --rc geninfo_unexecuted_blocks=1 00:10:38.390 00:10:38.390 ' 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:38.390 00:26:24 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:38.390 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:38.390 ************************************ 00:10:38.390 START TEST nvmf_auth_target 00:10:38.390 ************************************ 00:10:38.390 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:38.651 * Looking for test storage... 
00:10:38.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:38.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.651 --rc genhtml_branch_coverage=1 00:10:38.651 --rc genhtml_function_coverage=1 00:10:38.651 --rc genhtml_legend=1 00:10:38.651 --rc geninfo_all_blocks=1 00:10:38.651 --rc geninfo_unexecuted_blocks=1 00:10:38.651 00:10:38.651 ' 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:38.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.651 --rc genhtml_branch_coverage=1 00:10:38.651 --rc genhtml_function_coverage=1 00:10:38.651 --rc genhtml_legend=1 00:10:38.651 --rc geninfo_all_blocks=1 00:10:38.651 --rc geninfo_unexecuted_blocks=1 00:10:38.651 00:10:38.651 ' 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:38.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.651 --rc genhtml_branch_coverage=1 00:10:38.651 --rc genhtml_function_coverage=1 00:10:38.651 --rc genhtml_legend=1 00:10:38.651 --rc geninfo_all_blocks=1 00:10:38.651 --rc geninfo_unexecuted_blocks=1 00:10:38.651 00:10:38.651 ' 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:38.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.651 --rc genhtml_branch_coverage=1 00:10:38.651 --rc genhtml_function_coverage=1 00:10:38.651 --rc genhtml_legend=1 00:10:38.651 --rc geninfo_all_blocks=1 00:10:38.651 --rc geninfo_unexecuted_blocks=1 00:10:38.651 00:10:38.651 ' 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:38.651 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:38.652 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:38.652 
00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:38.652 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:38.653 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:38.653 Cannot find device "nvmf_init_br" 00:10:38.653 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:10:38.653 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:38.653 Cannot find device "nvmf_init_br2" 00:10:38.653 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:10:38.653 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:38.653 Cannot find device "nvmf_tgt_br" 00:10:38.653 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:10:38.653 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:38.653 Cannot find device "nvmf_tgt_br2" 00:10:38.653 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:10:38.653 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:38.653 Cannot find device "nvmf_init_br" 00:10:38.653 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:10:38.653 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:38.653 Cannot find device "nvmf_init_br2" 00:10:38.653 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:10:38.653 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:38.653 Cannot find device "nvmf_tgt_br" 00:10:38.653 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:10:38.653 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:38.653 Cannot find device "nvmf_tgt_br2" 00:10:38.653 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:10:38.653 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:38.653 Cannot find device "nvmf_br" 00:10:38.653 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:10:38.653 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:38.653 Cannot find device "nvmf_init_if" 00:10:38.653 00:26:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:10:38.653 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:38.653 Cannot find device "nvmf_init_if2" 00:10:38.912 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:10:38.912 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:38.912 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:38.912 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:10:38.912 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:38.912 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:38.912 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:10:38.912 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:38.912 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:38.912 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:38.912 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:38.912 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:38.912 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:38.912 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:38.912 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:38.912 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:38.912 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:38.912 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:38.912 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:38.912 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:38.912 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:38.912 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:38.912 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:38.912 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:38.912 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:38.912 00:26:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:38.912 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:38.912 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:38.912 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:38.912 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:38.912 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:38.912 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:38.912 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:38.912 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:38.912 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:38.912 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:38.913 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:38.913 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:38.913 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:38.913 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:38.913 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:38.913 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:10:38.913 00:10:38.913 --- 10.0.0.3 ping statistics --- 00:10:38.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.913 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:10:38.913 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:38.913 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:38.913 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:10:38.913 00:10:38.913 --- 10.0.0.4 ping statistics --- 00:10:38.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.913 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:10:38.913 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:38.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:38.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:10:38.913 00:10:38.913 --- 10.0.0.1 ping statistics --- 00:10:38.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.913 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:10:38.913 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:38.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:38.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:10:38.913 00:10:38.913 --- 10.0.0.2 ping statistics --- 00:10:38.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.913 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:10:38.913 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:38.913 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@457 -- # return 0 00:10:38.913 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:38.913 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:38.913 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:38.913 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:38.913 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:38.913 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:38.913 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:39.172 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:10:39.172 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:39.172 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:39.172 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.172 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=79089 00:10:39.172 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:10:39.172 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 79089 00:10:39.172 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 79089 ']' 00:10:39.172 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.172 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:39.172 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
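waitforlisten, started just above for pid 79089, polls until the freshly launched nvmf_tgt answers on its RPC socket. A minimal sketch of that pattern, assuming the default /var/tmp/spdk.sock and the stock scripts/rpc.py client:

pid=$1
rpc_addr=${2:-/var/tmp/spdk.sock}
for ((i = 100; i > 0; i--)); do
    # Bail out early if the target died instead of coming up.
    kill -0 "$pid" 2>/dev/null || { echo "process $pid exited before listening"; exit 1; }
    # rpc_get_methods succeeds once the app is serving RPCs on the socket.
    if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
        break
    fi
    sleep 0.5
done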
00:10:39.172 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:39.172 00:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.116 00:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:40.116 00:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:10:40.116 00:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:40.116 00:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:40.116 00:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.116 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:40.116 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=79121 00:10:40.116 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:10:40.116 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:10:40.116 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:10:40.116 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:10:40.116 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:40.116 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:10:40.116 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=null 00:10:40.116 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:10:40.116 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:40.116 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=51b95940a171504421e0f51b7e7e4cb9daec5455c4b2b3f9 00:10:40.116 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:10:40.116 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.3oO 00:10:40.116 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 51b95940a171504421e0f51b7e7e4cb9daec5455c4b2b3f9 0 00:10:40.116 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 51b95940a171504421e0f51b7e7e4cb9daec5455c4b2b3f9 0 00:10:40.116 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:10:40.116 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:10:40.116 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=51b95940a171504421e0f51b7e7e4cb9daec5455c4b2b3f9 00:10:40.116 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=0 00:10:40.116 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:10:40.116 00:26:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.3oO 00:10:40.116 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.3oO 00:10:40.116 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.3oO 00:10:40.116 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:10:40.117 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:10:40.117 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:40.117 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:10:40.117 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:10:40.117 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:10:40.117 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:40.117 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=f0f196610eec89df81201679539e72f38d9e60416517b23d3782dfde1af7e612 00:10:40.117 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:10:40.117 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.t5g 00:10:40.117 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key f0f196610eec89df81201679539e72f38d9e60416517b23d3782dfde1af7e612 3 00:10:40.117 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 f0f196610eec89df81201679539e72f38d9e60416517b23d3782dfde1af7e612 3 00:10:40.117 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:10:40.117 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:10:40.117 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=f0f196610eec89df81201679539e72f38d9e60416517b23d3782dfde1af7e612 00:10:40.117 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:10:40.117 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:10:40.376 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.t5g 00:10:40.376 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.t5g 00:10:40.376 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.t5g 00:10:40.376 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:10:40.376 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:10:40.376 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:40.376 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:10:40.376 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:10:40.376 00:26:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:10:40.376 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:40.376 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=78f5b88ed93c57ef2ddad44e204b8586 00:10:40.376 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:10:40.376 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.Qit 00:10:40.376 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 78f5b88ed93c57ef2ddad44e204b8586 1 00:10:40.376 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 78f5b88ed93c57ef2ddad44e204b8586 1 00:10:40.376 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:10:40.376 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:10:40.376 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=78f5b88ed93c57ef2ddad44e204b8586 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.Qit 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.Qit 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.Qit 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=738c6f463d4efd8d5161c31fc0ee372bf65d89337b690654 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.CJd 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 738c6f463d4efd8d5161c31fc0ee372bf65d89337b690654 2 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 738c6f463d4efd8d5161c31fc0ee372bf65d89337b690654 2 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@728 -- # prefix=DHHC-1 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=738c6f463d4efd8d5161c31fc0ee372bf65d89337b690654 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.CJd 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.CJd 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.CJd 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=393f42978dfce1f21d40bb9a11e498da2c8de1f3c22784db 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.Y05 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 393f42978dfce1f21d40bb9a11e498da2c8de1f3c22784db 2 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 393f42978dfce1f21d40bb9a11e498da2c8de1f3c22784db 2 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=393f42978dfce1f21d40bb9a11e498da2c8de1f3c22784db 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.Y05 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.Y05 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.Y05 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:10:40.377 00:26:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=ed948f148e04fdef50c2e0749e9f70de 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.RLD 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key ed948f148e04fdef50c2e0749e9f70de 1 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 ed948f148e04fdef50c2e0749e9f70de 1 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=ed948f148e04fdef50c2e0749e9f70de 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:10:40.377 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:10:40.636 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.RLD 00:10:40.636 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.RLD 00:10:40.636 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.RLD 00:10:40.636 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:10:40.636 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:10:40.636 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:40.636 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:10:40.636 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:10:40.636 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:10:40.636 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:40.636 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=c65d88fd3d4c4be1c63751db35521c0e22fd0cc43dda2de55209c1cbcca2bbcd 00:10:40.636 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:10:40.637 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.1nW 00:10:40.637 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 
c65d88fd3d4c4be1c63751db35521c0e22fd0cc43dda2de55209c1cbcca2bbcd 3 00:10:40.637 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 c65d88fd3d4c4be1c63751db35521c0e22fd0cc43dda2de55209c1cbcca2bbcd 3 00:10:40.637 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:10:40.637 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:10:40.637 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=c65d88fd3d4c4be1c63751db35521c0e22fd0cc43dda2de55209c1cbcca2bbcd 00:10:40.637 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:10:40.637 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:10:40.637 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.1nW 00:10:40.637 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.1nW 00:10:40.637 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.1nW 00:10:40.637 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:10:40.637 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 79089 00:10:40.637 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 79089 ']' 00:10:40.637 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.637 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:40.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.637 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.637 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:40.637 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.896 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:40.896 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:10:40.896 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 79121 /var/tmp/host.sock 00:10:40.896 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 79121 ']' 00:10:40.896 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:10:40.896 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:40.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:40.896 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
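Alongside the target, the trace starts a second SPDK app (spdk_tgt on /var/tmp/host.sock with -L nvme_auth) to act as the host, then gen_dhchap_key builds four secrets and three controller secrets: random bytes are pulled with xxd from /dev/urandom, wrapped into the DH-HMAC-CHAP representation DHHC-1:<hash>:<base64 data>: (hash 00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), written to mode-0600 files under /tmp, and collected into the keys[]/ckeys[] arrays. The inline python is not shown in the trace, so the sketch below reflects my reading of the encoding (base64 of the key bytes followed by a little-endian CRC-32 trailer), not the script's exact code.

# One gen_dhchap_key round as traced above: a 24-byte key, digest index 0 (null),
# written to a private temp file.
key=$(xxd -p -c0 -l 24 /dev/urandom)
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" 0 > "$file" <<'EOF'
import base64, binascii, sys
raw = bytes.fromhex(sys.argv[1])
digest = int(sys.argv[2])                        # 0=null, 1=sha256, 2=sha384, 3=sha512
crc = binascii.crc32(raw).to_bytes(4, "little")  # CRC-32 trailer (byte order assumed)
print(f"DHHC-1:{digest:02x}:{base64.b64encode(raw + crc).decode()}:")
EOF
chmod 0600 "$file"

The rest of the trace then registers each key file on both RPC sockets with keyring_file_add_key and walks the digest/dhgroup matrix: for every key index it pins the initiator with bdev_nvme_set_options, grants the host NQN on the subsystem via nvmf_subsystem_add_host with the matching --dhchap-key/--dhchap-ctrlr-key, attaches a controller, checks the qpair's auth block (digest, dhgroup, state "completed"), detaches, and repeats the handshake with the kernel initiator through nvme connect/disconnect. Below is a condensed sketch of one such round (key0, sha256 digest, null dhgroup) using only RPCs and flags that appear in the trace; the key file paths are assumed to be the ones produced by the generation step above.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock
NQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858

# Register the secrets on both sides: the target (default socket) and the host app.
"$RPC" keyring_file_add_key key0  /tmp/spdk.key-null.3oO
"$RPC" keyring_file_add_key ckey0 /tmp/spdk.key-sha512.t5g
"$RPC" -s "$HOST_SOCK" keyring_file_add_key key0  /tmp/spdk.key-null.3oO
"$RPC" -s "$HOST_SOCK" keyring_file_add_key ckey0 /tmp/spdk.key-sha512.t5g

# Restrict the initiator to one digest/dhgroup pair for this round.
"$RPC" -s "$HOST_SOCK" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

# Target side: allow the host NQN with the matching DH-HMAC-CHAP keys.
"$RPC" nvmf_subsystem_add_host "$NQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Authenticate over TCP, confirm the qpair reports state "completed", then detach.
"$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$HOSTNQN" -n "$NQN" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
"$RPC" nvmf_subsystem_get_qpairs "$NQN" | jq -r '.[0].auth.state'
"$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0

# Same round through the kernel initiator, passing the literal secrets, then clean up.
nvme connect -t tcp -a 10.0.0.3 -n "$NQN" -i 1 -q "$HOSTNQN" --hostid "${HOSTNQN##*:}" -l 0 \
    --dhchap-secret "$(cat /tmp/spdk.key-null.3oO)" --dhchap-ctrl-secret "$(cat /tmp/spdk.key-sha512.t5g)"
nvme disconnect -n "$NQN"
"$RPC" nvmf_subsystem_remove_host "$NQN" "$HOSTNQN"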
00:10:40.896 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:40.896 00:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.155 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:41.155 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:10:41.155 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:10:41.155 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.155 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.155 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.155 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:41.155 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.3oO 00:10:41.155 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.155 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.155 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.155 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.3oO 00:10:41.155 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.3oO 00:10:41.414 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.t5g ]] 00:10:41.414 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.t5g 00:10:41.414 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.414 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.414 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.414 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.t5g 00:10:41.414 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.t5g 00:10:41.673 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:41.673 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Qit 00:10:41.673 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.673 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.673 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.673 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Qit 00:10:41.673 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Qit 00:10:41.932 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.CJd ]] 00:10:41.932 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.CJd 00:10:41.932 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.932 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.191 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.191 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.CJd 00:10:42.191 00:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.CJd 00:10:42.191 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:42.191 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Y05 00:10:42.191 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.449 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.449 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.449 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Y05 00:10:42.449 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Y05 00:10:42.708 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.RLD ]] 00:10:42.708 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.RLD 00:10:42.708 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.708 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.708 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.708 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.RLD 00:10:42.708 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.RLD 00:10:42.967 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:42.967 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.1nW 00:10:42.967 00:26:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.967 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.967 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.967 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.1nW 00:10:42.967 00:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.1nW 00:10:43.226 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:10:43.226 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:10:43.226 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:43.226 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:43.226 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:43.226 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:43.485 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:10:43.485 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:43.485 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:43.485 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:43.485 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:43.485 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:43.485 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:43.485 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.485 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.485 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.485 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:43.485 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:43.485 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:43.745 00:10:43.745 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:43.745 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:43.745 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:44.012 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:44.012 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:44.012 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.012 00:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.012 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.012 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:44.012 { 00:10:44.012 "cntlid": 1, 00:10:44.013 "qid": 0, 00:10:44.013 "state": "enabled", 00:10:44.013 "thread": "nvmf_tgt_poll_group_000", 00:10:44.013 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:10:44.013 "listen_address": { 00:10:44.013 "trtype": "TCP", 00:10:44.013 "adrfam": "IPv4", 00:10:44.013 "traddr": "10.0.0.3", 00:10:44.013 "trsvcid": "4420" 00:10:44.013 }, 00:10:44.013 "peer_address": { 00:10:44.013 "trtype": "TCP", 00:10:44.013 "adrfam": "IPv4", 00:10:44.013 "traddr": "10.0.0.1", 00:10:44.013 "trsvcid": "37646" 00:10:44.013 }, 00:10:44.013 "auth": { 00:10:44.013 "state": "completed", 00:10:44.013 "digest": "sha256", 00:10:44.013 "dhgroup": "null" 00:10:44.013 } 00:10:44.013 } 00:10:44.013 ]' 00:10:44.013 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:44.271 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:44.271 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:44.271 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:44.271 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:44.271 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:44.271 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:44.271 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:44.530 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTFiOTU5NDBhMTcxNTA0NDIxZTBmNTFiN2U3ZTRjYjlkYWVjNTQ1NWM0YjJiM2Y5xxMAtA==: --dhchap-ctrl-secret DHHC-1:03:ZjBmMTk2NjEwZWVjODlkZjgxMjAxNjc5NTM5ZTcyZjM4ZDllNjA0MTY1MTdiMjNkMzc4MmRmZGUxYWY3ZTYxMlTis34=: 00:10:44.530 00:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:00:NTFiOTU5NDBhMTcxNTA0NDIxZTBmNTFiN2U3ZTRjYjlkYWVjNTQ1NWM0YjJiM2Y5xxMAtA==: --dhchap-ctrl-secret DHHC-1:03:ZjBmMTk2NjEwZWVjODlkZjgxMjAxNjc5NTM5ZTcyZjM4ZDllNjA0MTY1MTdiMjNkMzc4MmRmZGUxYWY3ZTYxMlTis34=: 00:10:48.720 00:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:48.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:48.720 00:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:10:48.720 00:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.720 00:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.720 00:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.720 00:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:48.720 00:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:48.720 00:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:49.294 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:10:49.294 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:49.294 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:49.294 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:49.294 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:49.294 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:49.294 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:49.294 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.294 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.294 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.294 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:49.294 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:49.294 00:26:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:49.554 00:10:49.554 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:49.554 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:49.554 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:49.813 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:49.813 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:49.813 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.813 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.813 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.813 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:49.813 { 00:10:49.813 "cntlid": 3, 00:10:49.813 "qid": 0, 00:10:49.813 "state": "enabled", 00:10:49.813 "thread": "nvmf_tgt_poll_group_000", 00:10:49.813 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:10:49.813 "listen_address": { 00:10:49.813 "trtype": "TCP", 00:10:49.813 "adrfam": "IPv4", 00:10:49.813 "traddr": "10.0.0.3", 00:10:49.813 "trsvcid": "4420" 00:10:49.813 }, 00:10:49.813 "peer_address": { 00:10:49.813 "trtype": "TCP", 00:10:49.813 "adrfam": "IPv4", 00:10:49.813 "traddr": "10.0.0.1", 00:10:49.813 "trsvcid": "37662" 00:10:49.813 }, 00:10:49.813 "auth": { 00:10:49.813 "state": "completed", 00:10:49.813 "digest": "sha256", 00:10:49.813 "dhgroup": "null" 00:10:49.813 } 00:10:49.813 } 00:10:49.813 ]' 00:10:49.813 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:49.813 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:49.813 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:49.813 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:49.813 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:50.072 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:50.072 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:50.072 00:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:50.332 00:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzhmNWI4OGVkOTNjNTdlZjJkZGFkNDRlMjA0Yjg1ODZnNz/h: --dhchap-ctrl-secret 
DHHC-1:02:NzM4YzZmNDYzZDRlZmQ4ZDUxNjFjMzFmYzBlZTM3MmJmNjVkODkzMzdiNjkwNjU06UaWzA==: 00:10:50.332 00:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:01:NzhmNWI4OGVkOTNjNTdlZjJkZGFkNDRlMjA0Yjg1ODZnNz/h: --dhchap-ctrl-secret DHHC-1:02:NzM4YzZmNDYzZDRlZmQ4ZDUxNjFjMzFmYzBlZTM3MmJmNjVkODkzMzdiNjkwNjU06UaWzA==: 00:10:50.899 00:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:50.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:50.899 00:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:10:50.899 00:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.899 00:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.899 00:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.899 00:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:50.899 00:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:50.899 00:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:51.158 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:10:51.158 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:51.158 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:51.158 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:51.158 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:51.158 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:51.158 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:51.158 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.158 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.158 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.158 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:51.158 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:51.158 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:51.725 00:10:51.725 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:51.725 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:51.725 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:51.983 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:51.983 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:51.983 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.983 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.983 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.983 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:51.983 { 00:10:51.983 "cntlid": 5, 00:10:51.983 "qid": 0, 00:10:51.983 "state": "enabled", 00:10:51.983 "thread": "nvmf_tgt_poll_group_000", 00:10:51.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:10:51.983 "listen_address": { 00:10:51.983 "trtype": "TCP", 00:10:51.983 "adrfam": "IPv4", 00:10:51.983 "traddr": "10.0.0.3", 00:10:51.983 "trsvcid": "4420" 00:10:51.983 }, 00:10:51.983 "peer_address": { 00:10:51.983 "trtype": "TCP", 00:10:51.983 "adrfam": "IPv4", 00:10:51.983 "traddr": "10.0.0.1", 00:10:51.983 "trsvcid": "37698" 00:10:51.983 }, 00:10:51.983 "auth": { 00:10:51.983 "state": "completed", 00:10:51.983 "digest": "sha256", 00:10:51.983 "dhgroup": "null" 00:10:51.983 } 00:10:51.983 } 00:10:51.983 ]' 00:10:51.983 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:51.983 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:51.983 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:51.984 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:51.984 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:51.984 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:51.984 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:51.984 00:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:52.242 00:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NDhmMTQ4ZTA0ZmRlZjUwYzJlMDc0OWU5ZjcwZGUufE1I: 00:10:52.242 00:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NDhmMTQ4ZTA0ZmRlZjUwYzJlMDc0OWU5ZjcwZGUufE1I: 00:10:53.177 00:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:53.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:53.177 00:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:10:53.177 00:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.177 00:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.177 00:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.177 00:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:53.177 00:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:53.177 00:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:53.436 00:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:10:53.436 00:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:53.436 00:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:53.436 00:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:53.436 00:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:53.436 00:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:53.436 00:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key3 00:10:53.436 00:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.436 00:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.436 00:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.436 00:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:53.436 00:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:53.436 00:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:53.695 00:10:53.695 00:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:53.695 00:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:53.695 00:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:54.262 00:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:54.262 00:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:54.262 00:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.262 00:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.262 00:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.262 00:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:54.262 { 00:10:54.262 "cntlid": 7, 00:10:54.262 "qid": 0, 00:10:54.262 "state": "enabled", 00:10:54.262 "thread": "nvmf_tgt_poll_group_000", 00:10:54.262 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:10:54.262 "listen_address": { 00:10:54.262 "trtype": "TCP", 00:10:54.262 "adrfam": "IPv4", 00:10:54.262 "traddr": "10.0.0.3", 00:10:54.262 "trsvcid": "4420" 00:10:54.262 }, 00:10:54.262 "peer_address": { 00:10:54.262 "trtype": "TCP", 00:10:54.262 "adrfam": "IPv4", 00:10:54.262 "traddr": "10.0.0.1", 00:10:54.262 "trsvcid": "50016" 00:10:54.262 }, 00:10:54.262 "auth": { 00:10:54.262 "state": "completed", 00:10:54.262 "digest": "sha256", 00:10:54.262 "dhgroup": "null" 00:10:54.262 } 00:10:54.262 } 00:10:54.262 ]' 00:10:54.262 00:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:54.262 00:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:54.262 00:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:54.262 00:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:54.262 00:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:54.262 00:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:54.262 00:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:54.262 00:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:54.520 00:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YzY1ZDg4ZmQzZDRjNGJlMWM2Mzc1MWRiMzU1MjFjMGUyMmZkMGNjNDNkZGEyZGU1NTIwOWMxY2JjY2EyYmJjZLEN8fE=: 00:10:54.520 00:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:03:YzY1ZDg4ZmQzZDRjNGJlMWM2Mzc1MWRiMzU1MjFjMGUyMmZkMGNjNDNkZGEyZGU1NTIwOWMxY2JjY2EyYmJjZLEN8fE=: 00:10:55.455 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:55.455 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:55.455 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:10:55.456 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.456 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.456 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.456 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:55.456 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:55.456 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:55.456 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:55.714 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:10:55.714 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:55.714 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:55.714 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:55.714 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:55.714 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:55.714 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:55.714 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.714 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.714 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.714 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:55.714 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:55.714 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:55.972 00:10:55.972 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:55.972 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:55.972 00:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:56.231 00:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:56.231 00:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:56.231 00:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.231 00:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.231 00:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.231 00:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:56.231 { 00:10:56.231 "cntlid": 9, 00:10:56.231 "qid": 0, 00:10:56.231 "state": "enabled", 00:10:56.231 "thread": "nvmf_tgt_poll_group_000", 00:10:56.231 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:10:56.231 "listen_address": { 00:10:56.231 "trtype": "TCP", 00:10:56.231 "adrfam": "IPv4", 00:10:56.231 "traddr": "10.0.0.3", 00:10:56.231 "trsvcid": "4420" 00:10:56.231 }, 00:10:56.231 "peer_address": { 00:10:56.231 "trtype": "TCP", 00:10:56.231 "adrfam": "IPv4", 00:10:56.231 "traddr": "10.0.0.1", 00:10:56.231 "trsvcid": "50046" 00:10:56.231 }, 00:10:56.231 "auth": { 00:10:56.231 "state": "completed", 00:10:56.231 "digest": "sha256", 00:10:56.231 "dhgroup": "ffdhe2048" 00:10:56.231 } 00:10:56.231 } 00:10:56.231 ]' 00:10:56.231 00:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:56.231 00:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:56.231 00:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:56.518 00:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:56.518 00:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:56.518 00:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:56.518 00:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:56.518 00:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:56.776 
00:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTFiOTU5NDBhMTcxNTA0NDIxZTBmNTFiN2U3ZTRjYjlkYWVjNTQ1NWM0YjJiM2Y5xxMAtA==: --dhchap-ctrl-secret DHHC-1:03:ZjBmMTk2NjEwZWVjODlkZjgxMjAxNjc5NTM5ZTcyZjM4ZDllNjA0MTY1MTdiMjNkMzc4MmRmZGUxYWY3ZTYxMlTis34=: 00:10:56.776 00:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:00:NTFiOTU5NDBhMTcxNTA0NDIxZTBmNTFiN2U3ZTRjYjlkYWVjNTQ1NWM0YjJiM2Y5xxMAtA==: --dhchap-ctrl-secret DHHC-1:03:ZjBmMTk2NjEwZWVjODlkZjgxMjAxNjc5NTM5ZTcyZjM4ZDllNjA0MTY1MTdiMjNkMzc4MmRmZGUxYWY3ZTYxMlTis34=: 00:10:57.711 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:57.711 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:57.711 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:10:57.711 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.711 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.711 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.711 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:57.711 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:57.711 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:57.711 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:10:57.711 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:57.711 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:57.711 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:57.711 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:57.711 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:57.711 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:57.711 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.711 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.711 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.711 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:57.711 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:57.711 00:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:58.277 00:10:58.277 00:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:58.277 00:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:58.277 00:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:58.535 00:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:58.535 00:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:58.535 00:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.535 00:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.535 00:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.535 00:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:58.535 { 00:10:58.535 "cntlid": 11, 00:10:58.535 "qid": 0, 00:10:58.535 "state": "enabled", 00:10:58.535 "thread": "nvmf_tgt_poll_group_000", 00:10:58.536 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:10:58.536 "listen_address": { 00:10:58.536 "trtype": "TCP", 00:10:58.536 "adrfam": "IPv4", 00:10:58.536 "traddr": "10.0.0.3", 00:10:58.536 "trsvcid": "4420" 00:10:58.536 }, 00:10:58.536 "peer_address": { 00:10:58.536 "trtype": "TCP", 00:10:58.536 "adrfam": "IPv4", 00:10:58.536 "traddr": "10.0.0.1", 00:10:58.536 "trsvcid": "50072" 00:10:58.536 }, 00:10:58.536 "auth": { 00:10:58.536 "state": "completed", 00:10:58.536 "digest": "sha256", 00:10:58.536 "dhgroup": "ffdhe2048" 00:10:58.536 } 00:10:58.536 } 00:10:58.536 ]' 00:10:58.536 00:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:58.536 00:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:58.536 00:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:58.536 00:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:58.536 00:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:58.794 00:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:58.794 00:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:58.794 
00:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:59.052 00:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzhmNWI4OGVkOTNjNTdlZjJkZGFkNDRlMjA0Yjg1ODZnNz/h: --dhchap-ctrl-secret DHHC-1:02:NzM4YzZmNDYzZDRlZmQ4ZDUxNjFjMzFmYzBlZTM3MmJmNjVkODkzMzdiNjkwNjU06UaWzA==: 00:10:59.052 00:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:01:NzhmNWI4OGVkOTNjNTdlZjJkZGFkNDRlMjA0Yjg1ODZnNz/h: --dhchap-ctrl-secret DHHC-1:02:NzM4YzZmNDYzZDRlZmQ4ZDUxNjFjMzFmYzBlZTM3MmJmNjVkODkzMzdiNjkwNjU06UaWzA==: 00:10:59.616 00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:59.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:59.616 00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:10:59.616 00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.616 00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.616 00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.616 00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:59.616 00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:59.616 00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:00.182 00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:11:00.182 00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:00.182 00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:00.182 00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:00.182 00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:00.182 00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:00.182 00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:00.182 00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.182 00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.182 00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:11:00.182 00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:00.182 00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:00.182 00:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:00.441 00:11:00.441 00:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:00.441 00:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:00.441 00:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:00.700 00:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:00.700 00:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:00.700 00:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.700 00:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.700 00:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.700 00:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:00.700 { 00:11:00.700 "cntlid": 13, 00:11:00.700 "qid": 0, 00:11:00.700 "state": "enabled", 00:11:00.700 "thread": "nvmf_tgt_poll_group_000", 00:11:00.700 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:11:00.700 "listen_address": { 00:11:00.700 "trtype": "TCP", 00:11:00.700 "adrfam": "IPv4", 00:11:00.700 "traddr": "10.0.0.3", 00:11:00.700 "trsvcid": "4420" 00:11:00.700 }, 00:11:00.700 "peer_address": { 00:11:00.700 "trtype": "TCP", 00:11:00.700 "adrfam": "IPv4", 00:11:00.700 "traddr": "10.0.0.1", 00:11:00.700 "trsvcid": "50094" 00:11:00.700 }, 00:11:00.700 "auth": { 00:11:00.700 "state": "completed", 00:11:00.700 "digest": "sha256", 00:11:00.700 "dhgroup": "ffdhe2048" 00:11:00.700 } 00:11:00.700 } 00:11:00.700 ]' 00:11:00.700 00:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:00.700 00:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:00.700 00:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:00.700 00:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:00.700 00:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:00.959 00:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:00.959 00:26:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:00.959 00:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:01.218 00:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NDhmMTQ4ZTA0ZmRlZjUwYzJlMDc0OWU5ZjcwZGUufE1I: 00:11:01.218 00:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NDhmMTQ4ZTA0ZmRlZjUwYzJlMDc0OWU5ZjcwZGUufE1I: 00:11:01.786 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:01.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:01.786 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:11:01.786 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.786 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.786 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.786 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:01.786 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:01.786 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:02.045 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:11:02.045 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:02.045 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:02.045 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:02.045 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:02.045 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:02.045 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key3 00:11:02.045 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.045 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:11:02.045 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.045 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:02.045 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:02.045 00:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:02.304 00:11:02.304 00:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:02.304 00:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:02.304 00:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:02.562 00:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:02.562 00:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:02.562 00:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.562 00:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.562 00:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.562 00:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:02.562 { 00:11:02.562 "cntlid": 15, 00:11:02.562 "qid": 0, 00:11:02.562 "state": "enabled", 00:11:02.562 "thread": "nvmf_tgt_poll_group_000", 00:11:02.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:11:02.562 "listen_address": { 00:11:02.562 "trtype": "TCP", 00:11:02.562 "adrfam": "IPv4", 00:11:02.562 "traddr": "10.0.0.3", 00:11:02.562 "trsvcid": "4420" 00:11:02.562 }, 00:11:02.562 "peer_address": { 00:11:02.562 "trtype": "TCP", 00:11:02.562 "adrfam": "IPv4", 00:11:02.562 "traddr": "10.0.0.1", 00:11:02.562 "trsvcid": "48344" 00:11:02.562 }, 00:11:02.562 "auth": { 00:11:02.562 "state": "completed", 00:11:02.562 "digest": "sha256", 00:11:02.562 "dhgroup": "ffdhe2048" 00:11:02.562 } 00:11:02.562 } 00:11:02.562 ]' 00:11:02.562 00:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:02.562 00:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:02.562 00:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:02.820 00:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:02.820 00:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:02.820 00:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:02.820 
00:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:02.820 00:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:03.078 00:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzY1ZDg4ZmQzZDRjNGJlMWM2Mzc1MWRiMzU1MjFjMGUyMmZkMGNjNDNkZGEyZGU1NTIwOWMxY2JjY2EyYmJjZLEN8fE=: 00:11:03.078 00:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:03:YzY1ZDg4ZmQzZDRjNGJlMWM2Mzc1MWRiMzU1MjFjMGUyMmZkMGNjNDNkZGEyZGU1NTIwOWMxY2JjY2EyYmJjZLEN8fE=: 00:11:03.644 00:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:03.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:03.644 00:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:11:03.644 00:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.644 00:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.644 00:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.644 00:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:03.644 00:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:03.644 00:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:03.644 00:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:03.902 00:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:11:03.902 00:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:03.902 00:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:03.902 00:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:03.902 00:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:03.902 00:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:03.902 00:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:03.902 00:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.902 00:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:11:03.902 00:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.902 00:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:03.902 00:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:03.902 00:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:04.468 00:11:04.468 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:04.468 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:04.468 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:04.727 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:04.727 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:04.727 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.727 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.727 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.727 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:04.727 { 00:11:04.727 "cntlid": 17, 00:11:04.727 "qid": 0, 00:11:04.727 "state": "enabled", 00:11:04.727 "thread": "nvmf_tgt_poll_group_000", 00:11:04.727 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:11:04.727 "listen_address": { 00:11:04.727 "trtype": "TCP", 00:11:04.727 "adrfam": "IPv4", 00:11:04.727 "traddr": "10.0.0.3", 00:11:04.727 "trsvcid": "4420" 00:11:04.727 }, 00:11:04.727 "peer_address": { 00:11:04.727 "trtype": "TCP", 00:11:04.727 "adrfam": "IPv4", 00:11:04.727 "traddr": "10.0.0.1", 00:11:04.727 "trsvcid": "48366" 00:11:04.727 }, 00:11:04.727 "auth": { 00:11:04.727 "state": "completed", 00:11:04.727 "digest": "sha256", 00:11:04.727 "dhgroup": "ffdhe3072" 00:11:04.727 } 00:11:04.727 } 00:11:04.727 ]' 00:11:04.727 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:04.727 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:04.727 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:04.727 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:04.727 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:04.727 00:26:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:04.727 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:04.727 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:04.985 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTFiOTU5NDBhMTcxNTA0NDIxZTBmNTFiN2U3ZTRjYjlkYWVjNTQ1NWM0YjJiM2Y5xxMAtA==: --dhchap-ctrl-secret DHHC-1:03:ZjBmMTk2NjEwZWVjODlkZjgxMjAxNjc5NTM5ZTcyZjM4ZDllNjA0MTY1MTdiMjNkMzc4MmRmZGUxYWY3ZTYxMlTis34=: 00:11:04.985 00:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:00:NTFiOTU5NDBhMTcxNTA0NDIxZTBmNTFiN2U3ZTRjYjlkYWVjNTQ1NWM0YjJiM2Y5xxMAtA==: --dhchap-ctrl-secret DHHC-1:03:ZjBmMTk2NjEwZWVjODlkZjgxMjAxNjc5NTM5ZTcyZjM4ZDllNjA0MTY1MTdiMjNkMzc4MmRmZGUxYWY3ZTYxMlTis34=: 00:11:05.921 00:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:05.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:05.921 00:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:11:05.921 00:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.921 00:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.921 00:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.921 00:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:05.921 00:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:05.921 00:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:05.921 00:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:11:05.921 00:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:05.921 00:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:05.921 00:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:05.921 00:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:05.921 00:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:05.921 00:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:11:05.921 00:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.921 00:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.921 00:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.921 00:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:05.921 00:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:05.921 00:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:06.488 00:11:06.488 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:06.488 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:06.488 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:06.746 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:06.746 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:06.746 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.746 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.746 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.746 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:06.746 { 00:11:06.746 "cntlid": 19, 00:11:06.746 "qid": 0, 00:11:06.746 "state": "enabled", 00:11:06.746 "thread": "nvmf_tgt_poll_group_000", 00:11:06.746 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:11:06.746 "listen_address": { 00:11:06.746 "trtype": "TCP", 00:11:06.746 "adrfam": "IPv4", 00:11:06.746 "traddr": "10.0.0.3", 00:11:06.746 "trsvcid": "4420" 00:11:06.746 }, 00:11:06.746 "peer_address": { 00:11:06.746 "trtype": "TCP", 00:11:06.746 "adrfam": "IPv4", 00:11:06.746 "traddr": "10.0.0.1", 00:11:06.746 "trsvcid": "48392" 00:11:06.746 }, 00:11:06.746 "auth": { 00:11:06.746 "state": "completed", 00:11:06.746 "digest": "sha256", 00:11:06.746 "dhgroup": "ffdhe3072" 00:11:06.746 } 00:11:06.746 } 00:11:06.746 ]' 00:11:06.746 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:06.746 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:06.746 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:06.746 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:06.746 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:06.746 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:06.746 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:06.746 00:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:07.313 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzhmNWI4OGVkOTNjNTdlZjJkZGFkNDRlMjA0Yjg1ODZnNz/h: --dhchap-ctrl-secret DHHC-1:02:NzM4YzZmNDYzZDRlZmQ4ZDUxNjFjMzFmYzBlZTM3MmJmNjVkODkzMzdiNjkwNjU06UaWzA==: 00:11:07.313 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:01:NzhmNWI4OGVkOTNjNTdlZjJkZGFkNDRlMjA0Yjg1ODZnNz/h: --dhchap-ctrl-secret DHHC-1:02:NzM4YzZmNDYzZDRlZmQ4ZDUxNjFjMzFmYzBlZTM3MmJmNjVkODkzMzdiNjkwNjU06UaWzA==: 00:11:07.880 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:07.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:07.880 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:11:07.880 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.880 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.880 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.880 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:07.880 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:07.880 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:08.139 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:11:08.139 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:08.139 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:08.139 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:08.139 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:08.139 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:08.139 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:08.139 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.139 00:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.139 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.139 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:08.139 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:08.139 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:08.397 00:11:08.397 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:08.397 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:08.397 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:08.656 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:08.656 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:08.656 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.656 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.914 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.914 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:08.914 { 00:11:08.914 "cntlid": 21, 00:11:08.914 "qid": 0, 00:11:08.914 "state": "enabled", 00:11:08.914 "thread": "nvmf_tgt_poll_group_000", 00:11:08.914 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:11:08.914 "listen_address": { 00:11:08.914 "trtype": "TCP", 00:11:08.914 "adrfam": "IPv4", 00:11:08.914 "traddr": "10.0.0.3", 00:11:08.914 "trsvcid": "4420" 00:11:08.914 }, 00:11:08.914 "peer_address": { 00:11:08.914 "trtype": "TCP", 00:11:08.914 "adrfam": "IPv4", 00:11:08.914 "traddr": "10.0.0.1", 00:11:08.914 "trsvcid": "48428" 00:11:08.914 }, 00:11:08.914 "auth": { 00:11:08.914 "state": "completed", 00:11:08.914 "digest": "sha256", 00:11:08.914 "dhgroup": "ffdhe3072" 00:11:08.914 } 00:11:08.914 } 00:11:08.914 ]' 00:11:08.914 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:08.914 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:08.914 00:26:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:08.914 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:08.914 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:08.914 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:08.914 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:08.914 00:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:09.173 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NDhmMTQ4ZTA0ZmRlZjUwYzJlMDc0OWU5ZjcwZGUufE1I: 00:11:09.173 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NDhmMTQ4ZTA0ZmRlZjUwYzJlMDc0OWU5ZjcwZGUufE1I: 00:11:10.107 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:10.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:10.107 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:11:10.107 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.107 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.107 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.107 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:10.107 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:10.107 00:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:10.366 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:11:10.366 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:10.366 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:10.366 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:10.366 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:10.366 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:10.366 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key3 00:11:10.366 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.366 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.366 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.366 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:10.366 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:10.366 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:10.624 00:11:10.624 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:10.624 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:10.624 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:10.883 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:10.883 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:10.883 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.883 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.883 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.883 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:10.883 { 00:11:10.883 "cntlid": 23, 00:11:10.883 "qid": 0, 00:11:10.883 "state": "enabled", 00:11:10.883 "thread": "nvmf_tgt_poll_group_000", 00:11:10.883 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:11:10.883 "listen_address": { 00:11:10.883 "trtype": "TCP", 00:11:10.883 "adrfam": "IPv4", 00:11:10.883 "traddr": "10.0.0.3", 00:11:10.883 "trsvcid": "4420" 00:11:10.883 }, 00:11:10.883 "peer_address": { 00:11:10.883 "trtype": "TCP", 00:11:10.883 "adrfam": "IPv4", 00:11:10.883 "traddr": "10.0.0.1", 00:11:10.883 "trsvcid": "48468" 00:11:10.883 }, 00:11:10.883 "auth": { 00:11:10.883 "state": "completed", 00:11:10.883 "digest": "sha256", 00:11:10.883 "dhgroup": "ffdhe3072" 00:11:10.883 } 00:11:10.883 } 00:11:10.883 ]' 00:11:10.883 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:10.883 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:11:11.141 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:11.141 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:11.141 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:11.141 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:11.141 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:11.141 00:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:11.400 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzY1ZDg4ZmQzZDRjNGJlMWM2Mzc1MWRiMzU1MjFjMGUyMmZkMGNjNDNkZGEyZGU1NTIwOWMxY2JjY2EyYmJjZLEN8fE=: 00:11:11.400 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:03:YzY1ZDg4ZmQzZDRjNGJlMWM2Mzc1MWRiMzU1MjFjMGUyMmZkMGNjNDNkZGEyZGU1NTIwOWMxY2JjY2EyYmJjZLEN8fE=: 00:11:12.014 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:12.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:12.014 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:11:12.014 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.014 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.014 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.014 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:12.014 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:12.014 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:12.014 00:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:12.286 00:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:11:12.286 00:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:12.286 00:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:12.286 00:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:12.286 00:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:12.286 00:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:12.286 00:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:12.286 00:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.286 00:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.286 00:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.286 00:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:12.286 00:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:12.286 00:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:12.546 00:11:12.546 00:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:12.546 00:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:12.546 00:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:13.114 00:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:13.114 00:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:13.114 00:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.114 00:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.114 00:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.114 00:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:13.114 { 00:11:13.114 "cntlid": 25, 00:11:13.114 "qid": 0, 00:11:13.114 "state": "enabled", 00:11:13.114 "thread": "nvmf_tgt_poll_group_000", 00:11:13.114 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:11:13.114 "listen_address": { 00:11:13.114 "trtype": "TCP", 00:11:13.114 "adrfam": "IPv4", 00:11:13.114 "traddr": "10.0.0.3", 00:11:13.114 "trsvcid": "4420" 00:11:13.114 }, 00:11:13.114 "peer_address": { 00:11:13.114 "trtype": "TCP", 00:11:13.114 "adrfam": "IPv4", 00:11:13.114 "traddr": "10.0.0.1", 00:11:13.114 "trsvcid": "57526" 00:11:13.114 }, 00:11:13.114 "auth": { 00:11:13.114 "state": "completed", 00:11:13.114 "digest": "sha256", 00:11:13.114 "dhgroup": "ffdhe4096" 00:11:13.114 } 00:11:13.114 } 00:11:13.114 ]' 00:11:13.114 00:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:11:13.114 00:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:13.114 00:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:13.114 00:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:13.114 00:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:13.114 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:13.114 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:13.114 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:13.373 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTFiOTU5NDBhMTcxNTA0NDIxZTBmNTFiN2U3ZTRjYjlkYWVjNTQ1NWM0YjJiM2Y5xxMAtA==: --dhchap-ctrl-secret DHHC-1:03:ZjBmMTk2NjEwZWVjODlkZjgxMjAxNjc5NTM5ZTcyZjM4ZDllNjA0MTY1MTdiMjNkMzc4MmRmZGUxYWY3ZTYxMlTis34=: 00:11:13.373 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:00:NTFiOTU5NDBhMTcxNTA0NDIxZTBmNTFiN2U3ZTRjYjlkYWVjNTQ1NWM0YjJiM2Y5xxMAtA==: --dhchap-ctrl-secret DHHC-1:03:ZjBmMTk2NjEwZWVjODlkZjgxMjAxNjc5NTM5ZTcyZjM4ZDllNjA0MTY1MTdiMjNkMzc4MmRmZGUxYWY3ZTYxMlTis34=: 00:11:14.311 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:14.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:14.311 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:11:14.311 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.311 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.311 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.311 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:14.311 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:14.311 00:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:14.311 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:11:14.311 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:14.311 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:14.311 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:14.311 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:14.311 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:14.311 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.311 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.311 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.311 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.311 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.311 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.311 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.879 00:11:14.879 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:14.879 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:14.879 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:15.138 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:15.138 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:15.138 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.138 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.138 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.138 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:15.138 { 00:11:15.138 "cntlid": 27, 00:11:15.138 "qid": 0, 00:11:15.138 "state": "enabled", 00:11:15.138 "thread": "nvmf_tgt_poll_group_000", 00:11:15.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:11:15.138 "listen_address": { 00:11:15.138 "trtype": "TCP", 00:11:15.138 "adrfam": "IPv4", 00:11:15.138 "traddr": "10.0.0.3", 00:11:15.138 "trsvcid": "4420" 00:11:15.138 }, 00:11:15.138 "peer_address": { 00:11:15.138 "trtype": "TCP", 00:11:15.138 "adrfam": "IPv4", 00:11:15.138 "traddr": "10.0.0.1", 00:11:15.138 "trsvcid": "57560" 00:11:15.138 }, 00:11:15.138 "auth": { 00:11:15.138 "state": "completed", 
00:11:15.138 "digest": "sha256", 00:11:15.138 "dhgroup": "ffdhe4096" 00:11:15.138 } 00:11:15.138 } 00:11:15.138 ]' 00:11:15.138 00:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:15.138 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:15.138 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:15.138 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:15.138 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:15.138 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:15.138 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:15.138 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:15.397 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzhmNWI4OGVkOTNjNTdlZjJkZGFkNDRlMjA0Yjg1ODZnNz/h: --dhchap-ctrl-secret DHHC-1:02:NzM4YzZmNDYzZDRlZmQ4ZDUxNjFjMzFmYzBlZTM3MmJmNjVkODkzMzdiNjkwNjU06UaWzA==: 00:11:15.397 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:01:NzhmNWI4OGVkOTNjNTdlZjJkZGFkNDRlMjA0Yjg1ODZnNz/h: --dhchap-ctrl-secret DHHC-1:02:NzM4YzZmNDYzZDRlZmQ4ZDUxNjFjMzFmYzBlZTM3MmJmNjVkODkzMzdiNjkwNjU06UaWzA==: 00:11:16.334 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:16.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:16.334 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:11:16.334 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.334 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.334 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.334 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:16.334 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:16.334 00:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:16.334 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:11:16.334 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:16.334 00:27:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:16.334 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:16.334 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:16.334 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:16.334 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.334 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.334 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.334 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.334 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.334 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.334 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.902 00:11:16.902 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:16.902 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:16.902 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:17.160 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:17.160 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:17.160 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.160 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.160 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.160 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:17.160 { 00:11:17.160 "cntlid": 29, 00:11:17.160 "qid": 0, 00:11:17.160 "state": "enabled", 00:11:17.160 "thread": "nvmf_tgt_poll_group_000", 00:11:17.160 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:11:17.160 "listen_address": { 00:11:17.160 "trtype": "TCP", 00:11:17.160 "adrfam": "IPv4", 00:11:17.160 "traddr": "10.0.0.3", 00:11:17.160 "trsvcid": "4420" 00:11:17.160 }, 00:11:17.160 "peer_address": { 00:11:17.160 "trtype": "TCP", 00:11:17.160 "adrfam": 
"IPv4", 00:11:17.160 "traddr": "10.0.0.1", 00:11:17.160 "trsvcid": "57588" 00:11:17.160 }, 00:11:17.160 "auth": { 00:11:17.160 "state": "completed", 00:11:17.160 "digest": "sha256", 00:11:17.160 "dhgroup": "ffdhe4096" 00:11:17.160 } 00:11:17.160 } 00:11:17.160 ]' 00:11:17.160 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:17.160 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:17.160 00:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:17.160 00:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:17.160 00:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:17.160 00:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:17.160 00:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:17.160 00:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:17.419 00:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NDhmMTQ4ZTA0ZmRlZjUwYzJlMDc0OWU5ZjcwZGUufE1I: 00:11:17.419 00:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NDhmMTQ4ZTA0ZmRlZjUwYzJlMDc0OWU5ZjcwZGUufE1I: 00:11:18.356 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:18.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:18.356 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:11:18.356 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.356 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.356 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.356 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:18.356 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:18.356 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:18.356 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:11:18.356 00:27:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:18.356 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:18.356 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:18.356 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:18.356 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:18.356 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key3 00:11:18.356 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.356 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.356 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.356 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:18.356 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:18.356 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:18.925 00:11:18.925 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:18.925 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:18.925 00:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:19.184 00:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:19.184 00:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:19.184 00:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.184 00:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.184 00:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.184 00:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:19.184 { 00:11:19.184 "cntlid": 31, 00:11:19.184 "qid": 0, 00:11:19.184 "state": "enabled", 00:11:19.184 "thread": "nvmf_tgt_poll_group_000", 00:11:19.184 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:11:19.184 "listen_address": { 00:11:19.184 "trtype": "TCP", 00:11:19.184 "adrfam": "IPv4", 00:11:19.184 "traddr": "10.0.0.3", 00:11:19.184 "trsvcid": "4420" 00:11:19.184 }, 00:11:19.184 "peer_address": { 00:11:19.184 "trtype": "TCP", 
00:11:19.184 "adrfam": "IPv4", 00:11:19.184 "traddr": "10.0.0.1", 00:11:19.184 "trsvcid": "57614" 00:11:19.184 }, 00:11:19.184 "auth": { 00:11:19.184 "state": "completed", 00:11:19.184 "digest": "sha256", 00:11:19.184 "dhgroup": "ffdhe4096" 00:11:19.184 } 00:11:19.184 } 00:11:19.184 ]' 00:11:19.184 00:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:19.184 00:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:19.184 00:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:19.442 00:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:19.442 00:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:19.442 00:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:19.442 00:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:19.442 00:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:19.701 00:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzY1ZDg4ZmQzZDRjNGJlMWM2Mzc1MWRiMzU1MjFjMGUyMmZkMGNjNDNkZGEyZGU1NTIwOWMxY2JjY2EyYmJjZLEN8fE=: 00:11:19.701 00:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:03:YzY1ZDg4ZmQzZDRjNGJlMWM2Mzc1MWRiMzU1MjFjMGUyMmZkMGNjNDNkZGEyZGU1NTIwOWMxY2JjY2EyYmJjZLEN8fE=: 00:11:20.269 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:20.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:20.269 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:11:20.269 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.269 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.269 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.269 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:20.269 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:20.269 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:20.269 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:20.528 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:11:20.528 
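The @119/@120/@121 markers above show the driving structure: an outer loop over the DH groups exercised in this run (ffdhe4096, ffdhe6144, ffdhe8192) and an inner loop over key indices 0 through 3, with bdev_nvme_set_options re-issued before every connection attempt. A minimal sketch of that skeleton follows, with the per-iteration body elided; paths are as in the earlier sketch, and note that key index 3 carries no controller key in this run.

SPDK=/home/vagrant/spdk_repo/spdk
HOSTSOCK=/var/tmp/host.sock
dhgroups=(ffdhe4096 ffdhe6144 ffdhe8192)

for dhgroup in "${dhgroups[@]}"; do
  for keyid in 0 1 2 3; do
    # Pin the host to one digest/dhgroup combination for this iteration.
    "$SPDK/scripts/rpc.py" -s "$HOSTSOCK" bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
    # ... add host with "key$keyid", attach, verify, detach, remove host ...
  done
done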
00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:20.528 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:20.528 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:20.528 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:20.528 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.528 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.528 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.528 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.528 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.528 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.528 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.528 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:21.097 00:11:21.097 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:21.097 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:21.097 00:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:21.356 00:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:21.356 00:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:21.356 00:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.356 00:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.356 00:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.356 00:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:21.356 { 00:11:21.356 "cntlid": 33, 00:11:21.356 "qid": 0, 00:11:21.356 "state": "enabled", 00:11:21.356 "thread": "nvmf_tgt_poll_group_000", 00:11:21.356 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:11:21.356 "listen_address": { 00:11:21.356 "trtype": "TCP", 00:11:21.356 "adrfam": "IPv4", 00:11:21.356 "traddr": 
"10.0.0.3", 00:11:21.356 "trsvcid": "4420" 00:11:21.356 }, 00:11:21.356 "peer_address": { 00:11:21.356 "trtype": "TCP", 00:11:21.356 "adrfam": "IPv4", 00:11:21.356 "traddr": "10.0.0.1", 00:11:21.356 "trsvcid": "57632" 00:11:21.356 }, 00:11:21.356 "auth": { 00:11:21.356 "state": "completed", 00:11:21.356 "digest": "sha256", 00:11:21.356 "dhgroup": "ffdhe6144" 00:11:21.356 } 00:11:21.356 } 00:11:21.356 ]' 00:11:21.356 00:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:21.356 00:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:21.356 00:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:21.615 00:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:21.615 00:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:21.615 00:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:21.615 00:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:21.615 00:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:21.874 00:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTFiOTU5NDBhMTcxNTA0NDIxZTBmNTFiN2U3ZTRjYjlkYWVjNTQ1NWM0YjJiM2Y5xxMAtA==: --dhchap-ctrl-secret DHHC-1:03:ZjBmMTk2NjEwZWVjODlkZjgxMjAxNjc5NTM5ZTcyZjM4ZDllNjA0MTY1MTdiMjNkMzc4MmRmZGUxYWY3ZTYxMlTis34=: 00:11:21.874 00:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:00:NTFiOTU5NDBhMTcxNTA0NDIxZTBmNTFiN2U3ZTRjYjlkYWVjNTQ1NWM0YjJiM2Y5xxMAtA==: --dhchap-ctrl-secret DHHC-1:03:ZjBmMTk2NjEwZWVjODlkZjgxMjAxNjc5NTM5ZTcyZjM4ZDllNjA0MTY1MTdiMjNkMzc4MmRmZGUxYWY3ZTYxMlTis34=: 00:11:22.442 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:22.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:22.442 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:11:22.442 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.442 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.442 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.442 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:22.442 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:22.442 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:22.702 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:11:22.702 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:22.702 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:22.702 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:22.702 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:22.702 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:22.702 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.702 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.702 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.702 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.702 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.702 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.702 00:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:23.270 00:11:23.270 00:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:23.270 00:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:23.270 00:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:23.270 00:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:23.270 00:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:23.270 00:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.270 00:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.270 00:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.270 00:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:23.270 { 00:11:23.270 "cntlid": 35, 00:11:23.270 "qid": 0, 00:11:23.270 "state": "enabled", 00:11:23.270 "thread": "nvmf_tgt_poll_group_000", 
00:11:23.270 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:11:23.270 "listen_address": { 00:11:23.270 "trtype": "TCP", 00:11:23.270 "adrfam": "IPv4", 00:11:23.270 "traddr": "10.0.0.3", 00:11:23.270 "trsvcid": "4420" 00:11:23.270 }, 00:11:23.270 "peer_address": { 00:11:23.270 "trtype": "TCP", 00:11:23.270 "adrfam": "IPv4", 00:11:23.270 "traddr": "10.0.0.1", 00:11:23.270 "trsvcid": "50308" 00:11:23.270 }, 00:11:23.270 "auth": { 00:11:23.270 "state": "completed", 00:11:23.270 "digest": "sha256", 00:11:23.270 "dhgroup": "ffdhe6144" 00:11:23.270 } 00:11:23.270 } 00:11:23.270 ]' 00:11:23.528 00:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:23.528 00:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:23.528 00:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:23.529 00:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:23.529 00:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:23.529 00:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:23.529 00:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:23.529 00:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:23.788 00:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzhmNWI4OGVkOTNjNTdlZjJkZGFkNDRlMjA0Yjg1ODZnNz/h: --dhchap-ctrl-secret DHHC-1:02:NzM4YzZmNDYzZDRlZmQ4ZDUxNjFjMzFmYzBlZTM3MmJmNjVkODkzMzdiNjkwNjU06UaWzA==: 00:11:23.788 00:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:01:NzhmNWI4OGVkOTNjNTdlZjJkZGFkNDRlMjA0Yjg1ODZnNz/h: --dhchap-ctrl-secret DHHC-1:02:NzM4YzZmNDYzZDRlZmQ4ZDUxNjFjMzFmYzBlZTM3MmJmNjVkODkzMzdiNjkwNjU06UaWzA==: 00:11:24.357 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:24.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:24.357 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:11:24.357 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.357 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.357 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.357 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:24.357 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:24.357 00:27:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:24.632 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:11:24.632 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:24.632 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:24.632 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:24.632 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:24.632 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:24.632 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:24.632 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.632 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.632 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.632 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:24.632 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:24.632 00:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:25.215 00:11:25.215 00:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:25.215 00:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:25.215 00:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:25.474 00:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:25.474 00:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:25.474 00:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.474 00:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.474 00:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.474 00:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:25.474 { 
00:11:25.474 "cntlid": 37, 00:11:25.474 "qid": 0, 00:11:25.474 "state": "enabled", 00:11:25.474 "thread": "nvmf_tgt_poll_group_000", 00:11:25.474 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:11:25.474 "listen_address": { 00:11:25.474 "trtype": "TCP", 00:11:25.474 "adrfam": "IPv4", 00:11:25.474 "traddr": "10.0.0.3", 00:11:25.474 "trsvcid": "4420" 00:11:25.474 }, 00:11:25.474 "peer_address": { 00:11:25.474 "trtype": "TCP", 00:11:25.474 "adrfam": "IPv4", 00:11:25.474 "traddr": "10.0.0.1", 00:11:25.474 "trsvcid": "50336" 00:11:25.474 }, 00:11:25.474 "auth": { 00:11:25.474 "state": "completed", 00:11:25.474 "digest": "sha256", 00:11:25.474 "dhgroup": "ffdhe6144" 00:11:25.474 } 00:11:25.474 } 00:11:25.474 ]' 00:11:25.474 00:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:25.474 00:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:25.474 00:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:25.474 00:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:25.474 00:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:25.474 00:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:25.474 00:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:25.474 00:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.041 00:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NDhmMTQ4ZTA0ZmRlZjUwYzJlMDc0OWU5ZjcwZGUufE1I: 00:11:26.041 00:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NDhmMTQ4ZTA0ZmRlZjUwYzJlMDc0OWU5ZjcwZGUufE1I: 00:11:26.608 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:26.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:26.608 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:11:26.608 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.608 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.608 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.608 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:26.608 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
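Each iteration is verified by listing the attached controllers on the host and dumping the subsystem's qpairs on the target, then checking the negotiated auth parameters with jq, as in the @73 through @77 steps above. A small sketch of that check for the ffdhe6144 pass (default target rpc.py socket assumed; paths and NQNs as before):

SPDK=/home/vagrant/spdk_repo/spdk
HOSTSOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0

# The attached controller should be the one created above.
[[ $("$SPDK/scripts/rpc.py" -s "$HOSTSOCK" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# The qpair should report completed DH-HMAC-CHAP with the expected parameters.
qpairs=$("$SPDK/scripts/rpc.py" nvmf_subsystem_get_qpairs "$SUBNQN")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]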
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:26.608 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:26.867 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:11:26.867 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:26.867 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:26.867 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:26.867 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:26.867 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:26.868 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key3 00:11:26.868 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.868 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.868 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.868 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:26.868 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:26.868 00:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:27.435 00:11:27.435 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:27.435 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:27.435 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:27.694 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:27.694 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:27.694 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.694 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.694 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.694 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:11:27.694 { 00:11:27.694 "cntlid": 39, 00:11:27.694 "qid": 0, 00:11:27.694 "state": "enabled", 00:11:27.694 "thread": "nvmf_tgt_poll_group_000", 00:11:27.694 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:11:27.694 "listen_address": { 00:11:27.694 "trtype": "TCP", 00:11:27.694 "adrfam": "IPv4", 00:11:27.694 "traddr": "10.0.0.3", 00:11:27.694 "trsvcid": "4420" 00:11:27.694 }, 00:11:27.694 "peer_address": { 00:11:27.694 "trtype": "TCP", 00:11:27.694 "adrfam": "IPv4", 00:11:27.694 "traddr": "10.0.0.1", 00:11:27.694 "trsvcid": "50352" 00:11:27.694 }, 00:11:27.694 "auth": { 00:11:27.694 "state": "completed", 00:11:27.694 "digest": "sha256", 00:11:27.694 "dhgroup": "ffdhe6144" 00:11:27.694 } 00:11:27.694 } 00:11:27.694 ]' 00:11:27.694 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:27.694 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:27.694 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:27.694 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:27.694 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:27.694 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:27.694 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:27.694 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:27.953 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzY1ZDg4ZmQzZDRjNGJlMWM2Mzc1MWRiMzU1MjFjMGUyMmZkMGNjNDNkZGEyZGU1NTIwOWMxY2JjY2EyYmJjZLEN8fE=: 00:11:27.953 00:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:03:YzY1ZDg4ZmQzZDRjNGJlMWM2Mzc1MWRiMzU1MjFjMGUyMmZkMGNjNDNkZGEyZGU1NTIwOWMxY2JjY2EyYmJjZLEN8fE=: 00:11:28.520 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:28.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:28.520 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:11:28.520 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.520 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.520 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.520 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:28.520 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:28.520 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:28.520 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:29.086 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:11:29.086 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:29.086 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:29.086 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:29.086 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:29.086 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:29.086 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.086 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.086 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.086 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.086 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.086 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.086 00:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.655 00:11:29.655 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:29.655 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:29.655 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.913 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.913 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.913 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.913 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.913 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:11:29.913 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:29.913 { 00:11:29.913 "cntlid": 41, 00:11:29.913 "qid": 0, 00:11:29.913 "state": "enabled", 00:11:29.913 "thread": "nvmf_tgt_poll_group_000", 00:11:29.913 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:11:29.913 "listen_address": { 00:11:29.913 "trtype": "TCP", 00:11:29.913 "adrfam": "IPv4", 00:11:29.913 "traddr": "10.0.0.3", 00:11:29.913 "trsvcid": "4420" 00:11:29.913 }, 00:11:29.913 "peer_address": { 00:11:29.913 "trtype": "TCP", 00:11:29.913 "adrfam": "IPv4", 00:11:29.913 "traddr": "10.0.0.1", 00:11:29.913 "trsvcid": "50388" 00:11:29.913 }, 00:11:29.913 "auth": { 00:11:29.913 "state": "completed", 00:11:29.913 "digest": "sha256", 00:11:29.913 "dhgroup": "ffdhe8192" 00:11:29.913 } 00:11:29.913 } 00:11:29.913 ]' 00:11:29.913 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:29.913 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:29.913 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:29.913 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:29.913 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:29.913 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:29.913 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:29.913 00:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:30.171 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTFiOTU5NDBhMTcxNTA0NDIxZTBmNTFiN2U3ZTRjYjlkYWVjNTQ1NWM0YjJiM2Y5xxMAtA==: --dhchap-ctrl-secret DHHC-1:03:ZjBmMTk2NjEwZWVjODlkZjgxMjAxNjc5NTM5ZTcyZjM4ZDllNjA0MTY1MTdiMjNkMzc4MmRmZGUxYWY3ZTYxMlTis34=: 00:11:30.171 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:00:NTFiOTU5NDBhMTcxNTA0NDIxZTBmNTFiN2U3ZTRjYjlkYWVjNTQ1NWM0YjJiM2Y5xxMAtA==: --dhchap-ctrl-secret DHHC-1:03:ZjBmMTk2NjEwZWVjODlkZjgxMjAxNjc5NTM5ZTcyZjM4ZDllNjA0MTY1MTdiMjNkMzc4MmRmZGUxYWY3ZTYxMlTis34=: 00:11:31.105 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:31.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:31.105 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:11:31.105 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.105 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.105 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:31.105 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:31.105 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:31.106 00:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:31.106 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:11:31.106 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:31.106 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:31.106 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:31.106 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:31.106 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:31.106 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.106 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.106 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.106 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.106 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.106 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.106 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.672 00:11:31.930 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:31.930 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:31.930 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:32.188 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:32.188 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:32.188 00:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.188 00:27:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.188 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.188 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:32.188 { 00:11:32.188 "cntlid": 43, 00:11:32.188 "qid": 0, 00:11:32.188 "state": "enabled", 00:11:32.188 "thread": "nvmf_tgt_poll_group_000", 00:11:32.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:11:32.188 "listen_address": { 00:11:32.188 "trtype": "TCP", 00:11:32.188 "adrfam": "IPv4", 00:11:32.188 "traddr": "10.0.0.3", 00:11:32.188 "trsvcid": "4420" 00:11:32.188 }, 00:11:32.188 "peer_address": { 00:11:32.188 "trtype": "TCP", 00:11:32.188 "adrfam": "IPv4", 00:11:32.188 "traddr": "10.0.0.1", 00:11:32.188 "trsvcid": "50404" 00:11:32.188 }, 00:11:32.188 "auth": { 00:11:32.188 "state": "completed", 00:11:32.188 "digest": "sha256", 00:11:32.188 "dhgroup": "ffdhe8192" 00:11:32.188 } 00:11:32.188 } 00:11:32.188 ]' 00:11:32.188 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:32.188 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:32.188 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:32.188 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:32.188 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:32.188 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:32.188 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:32.188 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:32.446 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzhmNWI4OGVkOTNjNTdlZjJkZGFkNDRlMjA0Yjg1ODZnNz/h: --dhchap-ctrl-secret DHHC-1:02:NzM4YzZmNDYzZDRlZmQ4ZDUxNjFjMzFmYzBlZTM3MmJmNjVkODkzMzdiNjkwNjU06UaWzA==: 00:11:32.446 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:01:NzhmNWI4OGVkOTNjNTdlZjJkZGFkNDRlMjA0Yjg1ODZnNz/h: --dhchap-ctrl-secret DHHC-1:02:NzM4YzZmNDYzZDRlZmQ4ZDUxNjFjMzFmYzBlZTM3MmJmNjVkODkzMzdiNjkwNjU06UaWzA==: 00:11:33.013 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:33.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:33.013 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:11:33.013 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.013 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
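target/auth.sh@80/@36 and @82 above exercise the same credentials through the kernel initiator: nvme-cli connects with the host and controller DH-HMAC-CHAP secrets, disconnects, and the host entry is then removed from the subsystem before the next combination. A sketch with the flags copied from the log; host_secret and ctrl_secret are placeholders for the DHHC-1 strings shown above:

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858
  # kernel initiator connect using the same secrets the bdev path was tested with
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 \
      --dhchap-secret "$host_secret" --dhchap-ctrl-secret "$ctrl_secret"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  # forget the host on the target so the next key/dhgroup pass starts clean
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
      nqn.2024-03.io.spdk:cnode0 "$hostnqn"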
00:11:33.013 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.013 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:33.013 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:33.013 00:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:33.271 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:11:33.271 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:33.271 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:33.271 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:33.271 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:33.271 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:33.271 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:33.271 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.271 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.271 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.271 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:33.272 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:33.272 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:33.838 00:11:33.838 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:33.838 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:33.838 00:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:34.403 00:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:34.403 00:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:34.403 00:27:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.403 00:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.403 00:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.403 00:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:34.403 { 00:11:34.403 "cntlid": 45, 00:11:34.403 "qid": 0, 00:11:34.403 "state": "enabled", 00:11:34.403 "thread": "nvmf_tgt_poll_group_000", 00:11:34.403 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:11:34.403 "listen_address": { 00:11:34.403 "trtype": "TCP", 00:11:34.403 "adrfam": "IPv4", 00:11:34.403 "traddr": "10.0.0.3", 00:11:34.403 "trsvcid": "4420" 00:11:34.403 }, 00:11:34.404 "peer_address": { 00:11:34.404 "trtype": "TCP", 00:11:34.404 "adrfam": "IPv4", 00:11:34.404 "traddr": "10.0.0.1", 00:11:34.404 "trsvcid": "49766" 00:11:34.404 }, 00:11:34.404 "auth": { 00:11:34.404 "state": "completed", 00:11:34.404 "digest": "sha256", 00:11:34.404 "dhgroup": "ffdhe8192" 00:11:34.404 } 00:11:34.404 } 00:11:34.404 ]' 00:11:34.404 00:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:34.404 00:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:34.404 00:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:34.404 00:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:34.404 00:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:34.404 00:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:34.404 00:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:34.404 00:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:34.662 00:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NDhmMTQ4ZTA0ZmRlZjUwYzJlMDc0OWU5ZjcwZGUufE1I: 00:11:34.662 00:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NDhmMTQ4ZTA0ZmRlZjUwYzJlMDc0OWU5ZjcwZGUufE1I: 00:11:35.284 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:35.284 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:35.284 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:11:35.284 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
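The add_host/attach pair seen at target/auth.sh@70, @71 and @60 above is the provisioning step of each iteration: the subsystem is told which DH-HMAC-CHAP key (and optionally controller key, for bidirectional authentication) this host must present, and the host stack then attaches a bdev controller with the matching keys. A sketch for the key2 pass; key2 and ckey2 are key names set up earlier in auth.sh and are assumed to exist:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858
  # target side: allow this host on cnode0 and pin its DH-HMAC-CHAP keys
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # host side (separate RPC socket): attach with the same keys, which triggers the handshake
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2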
00:11:35.284 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.284 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.284 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:35.284 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:35.284 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:35.542 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:11:35.542 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:35.542 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:35.542 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:35.542 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:35.542 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:35.542 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key3 00:11:35.542 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.542 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.542 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.542 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:35.542 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:35.543 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:36.108 00:11:36.108 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:36.108 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:36.108 00:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:36.366 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:36.366 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:36.366 
00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.366 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.366 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.366 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:36.366 { 00:11:36.366 "cntlid": 47, 00:11:36.366 "qid": 0, 00:11:36.366 "state": "enabled", 00:11:36.366 "thread": "nvmf_tgt_poll_group_000", 00:11:36.366 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:11:36.366 "listen_address": { 00:11:36.366 "trtype": "TCP", 00:11:36.366 "adrfam": "IPv4", 00:11:36.366 "traddr": "10.0.0.3", 00:11:36.366 "trsvcid": "4420" 00:11:36.366 }, 00:11:36.366 "peer_address": { 00:11:36.367 "trtype": "TCP", 00:11:36.367 "adrfam": "IPv4", 00:11:36.367 "traddr": "10.0.0.1", 00:11:36.367 "trsvcid": "49790" 00:11:36.367 }, 00:11:36.367 "auth": { 00:11:36.367 "state": "completed", 00:11:36.367 "digest": "sha256", 00:11:36.367 "dhgroup": "ffdhe8192" 00:11:36.367 } 00:11:36.367 } 00:11:36.367 ]' 00:11:36.367 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:36.367 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:36.367 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:36.367 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:36.625 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:36.625 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:36.625 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:36.625 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:36.883 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzY1ZDg4ZmQzZDRjNGJlMWM2Mzc1MWRiMzU1MjFjMGUyMmZkMGNjNDNkZGEyZGU1NTIwOWMxY2JjY2EyYmJjZLEN8fE=: 00:11:36.883 00:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:03:YzY1ZDg4ZmQzZDRjNGJlMWM2Mzc1MWRiMzU1MjFjMGUyMmZkMGNjNDNkZGEyZGU1NTIwOWMxY2JjY2EyYmJjZLEN8fE=: 00:11:37.448 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:37.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:37.448 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:11:37.448 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.448 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
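The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line that precedes every nvmf_subsystem_add_host call above is a bash ${parameter:+word} expansion: if ckeys[keyid] is set and non-empty the array picks up the extra option pair, otherwise it stays empty and authentication is unidirectional. That is why the key3 pass above adds the host with only --dhchap-key key3. A standalone illustration of the idiom; the array contents and variable names here are made up for the example:

  ckeys=([0]="some-ctrl-secret" [3]="")   # keyid 3 deliberately has no controller secret
  for keyid in "${!ckeys[@]}"; do
      extra=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
      echo "keyid=$keyid: ${#extra[@]} extra args: ${extra[*]}"
  done
  # prints 2 extra args (--dhchap-ctrlr-key ckey0) for keyid 0 and 0 extra args for keyid 3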
00:11:37.448 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.448 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:37.448 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:37.448 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:37.448 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:37.448 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:37.706 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:11:37.706 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:37.706 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:37.706 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:37.706 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:37.706 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:37.706 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:37.706 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.706 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.706 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.706 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:37.706 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:37.706 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:37.965 00:11:37.965 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:37.965 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:37.965 00:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:38.223 00:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:38.223 00:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:38.223 00:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.223 00:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.223 00:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.223 00:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:38.223 { 00:11:38.223 "cntlid": 49, 00:11:38.223 "qid": 0, 00:11:38.223 "state": "enabled", 00:11:38.223 "thread": "nvmf_tgt_poll_group_000", 00:11:38.223 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:11:38.223 "listen_address": { 00:11:38.223 "trtype": "TCP", 00:11:38.223 "adrfam": "IPv4", 00:11:38.223 "traddr": "10.0.0.3", 00:11:38.223 "trsvcid": "4420" 00:11:38.223 }, 00:11:38.223 "peer_address": { 00:11:38.223 "trtype": "TCP", 00:11:38.223 "adrfam": "IPv4", 00:11:38.223 "traddr": "10.0.0.1", 00:11:38.223 "trsvcid": "49806" 00:11:38.223 }, 00:11:38.223 "auth": { 00:11:38.223 "state": "completed", 00:11:38.223 "digest": "sha384", 00:11:38.223 "dhgroup": "null" 00:11:38.223 } 00:11:38.223 } 00:11:38.223 ]' 00:11:38.223 00:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:38.223 00:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:38.223 00:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:38.223 00:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:38.223 00:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:38.481 00:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:38.481 00:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:38.481 00:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:38.740 00:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTFiOTU5NDBhMTcxNTA0NDIxZTBmNTFiN2U3ZTRjYjlkYWVjNTQ1NWM0YjJiM2Y5xxMAtA==: --dhchap-ctrl-secret DHHC-1:03:ZjBmMTk2NjEwZWVjODlkZjgxMjAxNjc5NTM5ZTcyZjM4ZDllNjA0MTY1MTdiMjNkMzc4MmRmZGUxYWY3ZTYxMlTis34=: 00:11:38.740 00:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:00:NTFiOTU5NDBhMTcxNTA0NDIxZTBmNTFiN2U3ZTRjYjlkYWVjNTQ1NWM0YjJiM2Y5xxMAtA==: --dhchap-ctrl-secret DHHC-1:03:ZjBmMTk2NjEwZWVjODlkZjgxMjAxNjc5NTM5ZTcyZjM4ZDllNjA0MTY1MTdiMjNkMzc4MmRmZGUxYWY3ZTYxMlTis34=: 00:11:39.308 00:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:39.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:39.308 00:27:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:11:39.308 00:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.308 00:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.308 00:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.308 00:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:39.308 00:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:39.308 00:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:39.566 00:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:11:39.566 00:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:39.566 00:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:39.566 00:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:39.566 00:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:39.566 00:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:39.566 00:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:39.566 00:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.566 00:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.566 00:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.566 00:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:39.566 00:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:39.566 00:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:39.824 00:11:39.824 00:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:39.824 00:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:39.824 00:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:40.083 00:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:40.083 00:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:40.083 00:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.083 00:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.083 00:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.083 00:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:40.083 { 00:11:40.083 "cntlid": 51, 00:11:40.083 "qid": 0, 00:11:40.083 "state": "enabled", 00:11:40.083 "thread": "nvmf_tgt_poll_group_000", 00:11:40.083 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:11:40.083 "listen_address": { 00:11:40.083 "trtype": "TCP", 00:11:40.083 "adrfam": "IPv4", 00:11:40.083 "traddr": "10.0.0.3", 00:11:40.083 "trsvcid": "4420" 00:11:40.083 }, 00:11:40.083 "peer_address": { 00:11:40.083 "trtype": "TCP", 00:11:40.083 "adrfam": "IPv4", 00:11:40.083 "traddr": "10.0.0.1", 00:11:40.083 "trsvcid": "49834" 00:11:40.083 }, 00:11:40.083 "auth": { 00:11:40.083 "state": "completed", 00:11:40.083 "digest": "sha384", 00:11:40.083 "dhgroup": "null" 00:11:40.083 } 00:11:40.083 } 00:11:40.083 ]' 00:11:40.083 00:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:40.341 00:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:40.341 00:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:40.341 00:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:40.341 00:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:40.341 00:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:40.341 00:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:40.341 00:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:40.600 00:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzhmNWI4OGVkOTNjNTdlZjJkZGFkNDRlMjA0Yjg1ODZnNz/h: --dhchap-ctrl-secret DHHC-1:02:NzM4YzZmNDYzZDRlZmQ4ZDUxNjFjMzFmYzBlZTM3MmJmNjVkODkzMzdiNjkwNjU06UaWzA==: 00:11:40.600 00:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:01:NzhmNWI4OGVkOTNjNTdlZjJkZGFkNDRlMjA0Yjg1ODZnNz/h: --dhchap-ctrl-secret DHHC-1:02:NzM4YzZmNDYzZDRlZmQ4ZDUxNjFjMzFmYzBlZTM3MmJmNjVkODkzMzdiNjkwNjU06UaWzA==: 00:11:41.166 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:41.166 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:41.166 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:11:41.166 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.166 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.166 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.166 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:41.166 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:41.166 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:41.425 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:11:41.425 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:41.425 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:41.425 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:41.425 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:41.425 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:41.425 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:41.425 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.425 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.425 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.425 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:41.425 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:41.425 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:41.684 00:11:41.684 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:41.684 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:11:41.684 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:41.943 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:41.943 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:41.943 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.943 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.943 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.943 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:41.943 { 00:11:41.943 "cntlid": 53, 00:11:41.943 "qid": 0, 00:11:41.943 "state": "enabled", 00:11:41.943 "thread": "nvmf_tgt_poll_group_000", 00:11:41.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:11:41.943 "listen_address": { 00:11:41.943 "trtype": "TCP", 00:11:41.943 "adrfam": "IPv4", 00:11:41.943 "traddr": "10.0.0.3", 00:11:41.943 "trsvcid": "4420" 00:11:41.943 }, 00:11:41.943 "peer_address": { 00:11:41.943 "trtype": "TCP", 00:11:41.943 "adrfam": "IPv4", 00:11:41.943 "traddr": "10.0.0.1", 00:11:41.943 "trsvcid": "49862" 00:11:41.943 }, 00:11:41.943 "auth": { 00:11:41.943 "state": "completed", 00:11:41.943 "digest": "sha384", 00:11:41.943 "dhgroup": "null" 00:11:41.943 } 00:11:41.943 } 00:11:41.943 ]' 00:11:41.943 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:42.201 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:42.201 00:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:42.201 00:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:42.201 00:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:42.201 00:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:42.201 00:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:42.201 00:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:42.459 00:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NDhmMTQ4ZTA0ZmRlZjUwYzJlMDc0OWU5ZjcwZGUufE1I: 00:11:42.460 00:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NDhmMTQ4ZTA0ZmRlZjUwYzJlMDc0OWU5ZjcwZGUufE1I: 00:11:43.026 00:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:43.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:43.026 00:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:11:43.026 00:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.026 00:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.026 00:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.026 00:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:43.026 00:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:43.026 00:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:43.285 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:11:43.285 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:43.285 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:43.285 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:43.285 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:43.285 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:43.285 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key3 00:11:43.285 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.285 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.285 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.285 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:43.285 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:43.285 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:43.544 00:11:43.544 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:43.544 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
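Each pass opens with the bdev_nvme_set_options call at target/auth.sh@121/@31, which narrows the host to exactly one digest and one DH group; that is what lets the @75-@77 assertions expect a single fixed value in the qpair dump rather than whatever the host happened to negotiate. A sketch with the values of the sha384/null pass above; the rpc.py path and socket are copied from the log:

  # host-side RPC socket: allow only sha384 and the "null" (no Diffie-Hellman) group for this pass
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null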
00:11:43.544 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:43.803 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:43.803 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:43.803 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.803 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.803 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.803 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:43.803 { 00:11:43.803 "cntlid": 55, 00:11:43.803 "qid": 0, 00:11:43.803 "state": "enabled", 00:11:43.803 "thread": "nvmf_tgt_poll_group_000", 00:11:43.803 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:11:43.803 "listen_address": { 00:11:43.803 "trtype": "TCP", 00:11:43.803 "adrfam": "IPv4", 00:11:43.803 "traddr": "10.0.0.3", 00:11:43.803 "trsvcid": "4420" 00:11:43.803 }, 00:11:43.803 "peer_address": { 00:11:43.803 "trtype": "TCP", 00:11:43.803 "adrfam": "IPv4", 00:11:43.803 "traddr": "10.0.0.1", 00:11:43.803 "trsvcid": "34110" 00:11:43.803 }, 00:11:43.803 "auth": { 00:11:43.803 "state": "completed", 00:11:43.803 "digest": "sha384", 00:11:43.803 "dhgroup": "null" 00:11:43.803 } 00:11:43.803 } 00:11:43.803 ]' 00:11:43.803 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:43.803 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:43.803 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:44.061 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:44.061 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:44.061 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:44.061 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:44.061 00:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:44.320 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzY1ZDg4ZmQzZDRjNGJlMWM2Mzc1MWRiMzU1MjFjMGUyMmZkMGNjNDNkZGEyZGU1NTIwOWMxY2JjY2EyYmJjZLEN8fE=: 00:11:44.320 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:03:YzY1ZDg4ZmQzZDRjNGJlMWM2Mzc1MWRiMzU1MjFjMGUyMmZkMGNjNDNkZGEyZGU1NTIwOWMxY2JjY2EyYmJjZLEN8fE=: 00:11:44.886 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:44.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
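The auth.sh markers that recur through this section (@118 for digest, @119 for dhgroup, @120 for keyid, @121 set_options, @123 connect_authenticate) point at a three-level loop driving all of these passes. A reconstruction of its shape; the array names follow the expansions visible in the log, but the real script may differ in detail:

  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              # restrict the host to this digest/dhgroup combination ...
              hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
              # ... then run the add_host / attach / verify / detach / nvme-connect cycle
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done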
00:11:44.886 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:11:44.886 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.886 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.886 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.886 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:44.886 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:44.886 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:44.886 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:45.145 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:11:45.145 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:45.145 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:45.145 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:45.145 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:45.145 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:45.145 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:45.145 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.145 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.145 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.145 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:45.145 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:45.145 00:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:45.430 00:11:45.430 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:45.430 
00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:45.430 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:45.705 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:45.705 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:45.705 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.705 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.705 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.705 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:45.705 { 00:11:45.705 "cntlid": 57, 00:11:45.705 "qid": 0, 00:11:45.705 "state": "enabled", 00:11:45.705 "thread": "nvmf_tgt_poll_group_000", 00:11:45.705 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:11:45.705 "listen_address": { 00:11:45.705 "trtype": "TCP", 00:11:45.705 "adrfam": "IPv4", 00:11:45.705 "traddr": "10.0.0.3", 00:11:45.705 "trsvcid": "4420" 00:11:45.705 }, 00:11:45.705 "peer_address": { 00:11:45.705 "trtype": "TCP", 00:11:45.705 "adrfam": "IPv4", 00:11:45.705 "traddr": "10.0.0.1", 00:11:45.705 "trsvcid": "34132" 00:11:45.705 }, 00:11:45.705 "auth": { 00:11:45.705 "state": "completed", 00:11:45.705 "digest": "sha384", 00:11:45.705 "dhgroup": "ffdhe2048" 00:11:45.705 } 00:11:45.705 } 00:11:45.705 ]' 00:11:45.705 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:45.964 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:45.964 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:45.964 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:45.964 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:45.964 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.964 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.964 00:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:46.223 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTFiOTU5NDBhMTcxNTA0NDIxZTBmNTFiN2U3ZTRjYjlkYWVjNTQ1NWM0YjJiM2Y5xxMAtA==: --dhchap-ctrl-secret DHHC-1:03:ZjBmMTk2NjEwZWVjODlkZjgxMjAxNjc5NTM5ZTcyZjM4ZDllNjA0MTY1MTdiMjNkMzc4MmRmZGUxYWY3ZTYxMlTis34=: 00:11:46.223 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:00:NTFiOTU5NDBhMTcxNTA0NDIxZTBmNTFiN2U3ZTRjYjlkYWVjNTQ1NWM0YjJiM2Y5xxMAtA==: 
--dhchap-ctrl-secret DHHC-1:03:ZjBmMTk2NjEwZWVjODlkZjgxMjAxNjc5NTM5ZTcyZjM4ZDllNjA0MTY1MTdiMjNkMzc4MmRmZGUxYWY3ZTYxMlTis34=: 00:11:46.790 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:46.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:46.790 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:11:46.790 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.790 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.049 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.049 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:47.049 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:47.049 00:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:47.049 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:11:47.049 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:47.049 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:47.049 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:47.049 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:47.049 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:47.049 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:47.049 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.049 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.049 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.049 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:47.049 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:47.049 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:47.615 00:11:47.615 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:47.615 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:47.616 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:47.874 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:47.874 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:47.874 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.874 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.874 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.874 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:47.874 { 00:11:47.874 "cntlid": 59, 00:11:47.874 "qid": 0, 00:11:47.874 "state": "enabled", 00:11:47.874 "thread": "nvmf_tgt_poll_group_000", 00:11:47.874 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:11:47.874 "listen_address": { 00:11:47.874 "trtype": "TCP", 00:11:47.874 "adrfam": "IPv4", 00:11:47.874 "traddr": "10.0.0.3", 00:11:47.874 "trsvcid": "4420" 00:11:47.874 }, 00:11:47.874 "peer_address": { 00:11:47.874 "trtype": "TCP", 00:11:47.874 "adrfam": "IPv4", 00:11:47.874 "traddr": "10.0.0.1", 00:11:47.874 "trsvcid": "34174" 00:11:47.874 }, 00:11:47.874 "auth": { 00:11:47.874 "state": "completed", 00:11:47.874 "digest": "sha384", 00:11:47.874 "dhgroup": "ffdhe2048" 00:11:47.874 } 00:11:47.874 } 00:11:47.874 ]' 00:11:47.874 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:47.874 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:47.874 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:47.874 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:47.874 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:47.874 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:47.875 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.875 00:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:48.134 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzhmNWI4OGVkOTNjNTdlZjJkZGFkNDRlMjA0Yjg1ODZnNz/h: --dhchap-ctrl-secret DHHC-1:02:NzM4YzZmNDYzZDRlZmQ4ZDUxNjFjMzFmYzBlZTM3MmJmNjVkODkzMzdiNjkwNjU06UaWzA==: 00:11:48.134 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:01:NzhmNWI4OGVkOTNjNTdlZjJkZGFkNDRlMjA0Yjg1ODZnNz/h: --dhchap-ctrl-secret DHHC-1:02:NzM4YzZmNDYzZDRlZmQ4ZDUxNjFjMzFmYzBlZTM3MmJmNjVkODkzMzdiNjkwNjU06UaWzA==: 00:11:48.700 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:48.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:48.700 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:11:48.700 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.700 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.700 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.700 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:48.700 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:48.700 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:48.959 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:11:48.959 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:48.959 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:48.959 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:48.959 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:48.959 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:48.959 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:48.959 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.959 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.959 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.959 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:48.959 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:48.959 00:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:49.536 00:11:49.536 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:49.536 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:49.536 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:49.536 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:49.536 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:49.536 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.536 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.536 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.536 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:49.536 { 00:11:49.536 "cntlid": 61, 00:11:49.536 "qid": 0, 00:11:49.536 "state": "enabled", 00:11:49.536 "thread": "nvmf_tgt_poll_group_000", 00:11:49.536 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:11:49.536 "listen_address": { 00:11:49.536 "trtype": "TCP", 00:11:49.536 "adrfam": "IPv4", 00:11:49.536 "traddr": "10.0.0.3", 00:11:49.536 "trsvcid": "4420" 00:11:49.536 }, 00:11:49.536 "peer_address": { 00:11:49.536 "trtype": "TCP", 00:11:49.536 "adrfam": "IPv4", 00:11:49.536 "traddr": "10.0.0.1", 00:11:49.536 "trsvcid": "34190" 00:11:49.536 }, 00:11:49.536 "auth": { 00:11:49.536 "state": "completed", 00:11:49.536 "digest": "sha384", 00:11:49.536 "dhgroup": "ffdhe2048" 00:11:49.536 } 00:11:49.536 } 00:11:49.536 ]' 00:11:49.536 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:49.795 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:49.795 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:49.795 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:49.795 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:49.795 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:49.795 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:49.795 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:50.054 00:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NDhmMTQ4ZTA0ZmRlZjUwYzJlMDc0OWU5ZjcwZGUufE1I: 00:11:50.054 00:27:35 
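[Editor's note] The stretch of log above is one pass of the sha384/ffdhe2048 iteration: the host's bdev_nvme layer is pinned to that digest and DH group, the target registers the host NQN with a DH-HMAC-CHAP key pair, and the host attaches a controller using the same keys. A condensed, hand-written sketch of that sequence follows; the command names, flags, addresses and NQNs are copied from the log, while the SUBNQN/HOSTNQN/RPC variable names are illustrative, key2/ckey2 are keyring names set up earlier in auth.sh, and the target RPC socket is assumed to be the default one.

    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Host-side bdev_nvme options: restrict DH-HMAC-CHAP to sha384 + ffdhe2048.
    "$RPC" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

    # Target side (default RPC socket assumed): allow the host with key2/ckey2.
    "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Host side: attach a controller over TCP, authenticating with the same keys.
    "$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2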
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NDhmMTQ4ZTA0ZmRlZjUwYzJlMDc0OWU5ZjcwZGUufE1I: 00:11:50.621 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:50.879 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:50.879 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:11:50.879 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.879 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.879 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.879 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:50.879 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:50.879 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:51.138 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:11:51.138 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:51.138 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:51.138 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:51.138 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:51.138 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:51.138 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key3 00:11:51.138 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.138 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.138 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.138 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:51.138 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:51.138 00:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:51.396 00:11:51.396 00:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:51.396 00:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:51.396 00:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:51.654 00:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:51.654 00:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:51.654 00:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.654 00:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.654 00:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.654 00:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:51.654 { 00:11:51.654 "cntlid": 63, 00:11:51.654 "qid": 0, 00:11:51.654 "state": "enabled", 00:11:51.654 "thread": "nvmf_tgt_poll_group_000", 00:11:51.654 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:11:51.654 "listen_address": { 00:11:51.654 "trtype": "TCP", 00:11:51.654 "adrfam": "IPv4", 00:11:51.654 "traddr": "10.0.0.3", 00:11:51.654 "trsvcid": "4420" 00:11:51.654 }, 00:11:51.654 "peer_address": { 00:11:51.654 "trtype": "TCP", 00:11:51.654 "adrfam": "IPv4", 00:11:51.654 "traddr": "10.0.0.1", 00:11:51.654 "trsvcid": "34220" 00:11:51.654 }, 00:11:51.654 "auth": { 00:11:51.654 "state": "completed", 00:11:51.654 "digest": "sha384", 00:11:51.654 "dhgroup": "ffdhe2048" 00:11:51.654 } 00:11:51.654 } 00:11:51.654 ]' 00:11:51.654 00:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:51.654 00:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:51.654 00:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:51.654 00:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:51.913 00:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:51.913 00:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:51.913 00:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.913 00:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:52.172 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzY1ZDg4ZmQzZDRjNGJlMWM2Mzc1MWRiMzU1MjFjMGUyMmZkMGNjNDNkZGEyZGU1NTIwOWMxY2JjY2EyYmJjZLEN8fE=: 00:11:52.172 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:03:YzY1ZDg4ZmQzZDRjNGJlMWM2Mzc1MWRiMzU1MjFjMGUyMmZkMGNjNDNkZGEyZGU1NTIwOWMxY2JjY2EyYmJjZLEN8fE=: 00:11:52.739 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:52.739 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:11:52.739 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.739 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.739 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.739 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:52.739 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:52.739 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:52.739 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:52.997 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:11:52.997 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:52.997 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:52.997 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:52.997 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:52.997 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.997 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:52.997 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.997 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.997 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.997 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:52.997 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:11:52.997 00:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:53.564 00:11:53.564 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:53.564 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:53.564 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:53.822 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.822 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.823 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.823 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.823 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.823 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:53.823 { 00:11:53.823 "cntlid": 65, 00:11:53.823 "qid": 0, 00:11:53.823 "state": "enabled", 00:11:53.823 "thread": "nvmf_tgt_poll_group_000", 00:11:53.823 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:11:53.823 "listen_address": { 00:11:53.823 "trtype": "TCP", 00:11:53.823 "adrfam": "IPv4", 00:11:53.823 "traddr": "10.0.0.3", 00:11:53.823 "trsvcid": "4420" 00:11:53.823 }, 00:11:53.823 "peer_address": { 00:11:53.823 "trtype": "TCP", 00:11:53.823 "adrfam": "IPv4", 00:11:53.823 "traddr": "10.0.0.1", 00:11:53.823 "trsvcid": "52760" 00:11:53.823 }, 00:11:53.823 "auth": { 00:11:53.823 "state": "completed", 00:11:53.823 "digest": "sha384", 00:11:53.823 "dhgroup": "ffdhe3072" 00:11:53.823 } 00:11:53.823 } 00:11:53.823 ]' 00:11:53.823 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:53.823 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:53.823 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:53.823 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:53.823 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:53.823 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:53.823 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:53.823 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:54.081 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NTFiOTU5NDBhMTcxNTA0NDIxZTBmNTFiN2U3ZTRjYjlkYWVjNTQ1NWM0YjJiM2Y5xxMAtA==: --dhchap-ctrl-secret DHHC-1:03:ZjBmMTk2NjEwZWVjODlkZjgxMjAxNjc5NTM5ZTcyZjM4ZDllNjA0MTY1MTdiMjNkMzc4MmRmZGUxYWY3ZTYxMlTis34=: 00:11:54.082 00:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:00:NTFiOTU5NDBhMTcxNTA0NDIxZTBmNTFiN2U3ZTRjYjlkYWVjNTQ1NWM0YjJiM2Y5xxMAtA==: --dhchap-ctrl-secret DHHC-1:03:ZjBmMTk2NjEwZWVjODlkZjgxMjAxNjc5NTM5ZTcyZjM4ZDllNjA0MTY1MTdiMjNkMzc4MmRmZGUxYWY3ZTYxMlTis34=: 00:11:54.648 00:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.648 00:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:11:54.648 00:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.648 00:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.648 00:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.648 00:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:54.648 00:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:54.648 00:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:54.907 00:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:11:54.907 00:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:54.907 00:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:54.907 00:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:54.907 00:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:54.907 00:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.907 00:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:54.907 00:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.907 00:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.171 00:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.171 00:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:55.171 00:27:40 
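[Editor's note] After each attach, the script checks that authentication actually completed with the expected parameters rather than merely that the connect returned: it compares the controller name on the host, then queries the target's qpairs and asserts on the auth fields with jq. A hedged recap using the same jq paths that appear in the log; the digest/dhgroup values shown match the ffdhe3072 passes in progress here and track whatever bdev_nvme_set_options configured, and the RPC/HOSTRPC/qpairs names are illustrative.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    HOSTRPC="$RPC -s /var/tmp/host.sock"

    # The host must report exactly the controller that was just attached ...
    [[ "$($HOSTRPC bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]

    # ... and the target's qpair (default target RPC socket assumed) must show
    # completed DH-HMAC-CHAP authentication with the configured parameters.
    qpairs=$("$RPC" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == "sha384"    ]]
    [[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == "ffdhe3072" ]]
    [[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == "completed" ]]

    # Tear the host-side controller down before the next key id.
    $HOSTRPC bdev_nvme_detach_controller nvme0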
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:55.171 00:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:55.436 00:11:55.436 00:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:55.436 00:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.436 00:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:55.710 00:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.710 00:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.710 00:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.710 00:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.710 00:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.710 00:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:55.710 { 00:11:55.710 "cntlid": 67, 00:11:55.710 "qid": 0, 00:11:55.710 "state": "enabled", 00:11:55.710 "thread": "nvmf_tgt_poll_group_000", 00:11:55.710 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:11:55.710 "listen_address": { 00:11:55.710 "trtype": "TCP", 00:11:55.710 "adrfam": "IPv4", 00:11:55.710 "traddr": "10.0.0.3", 00:11:55.710 "trsvcid": "4420" 00:11:55.710 }, 00:11:55.710 "peer_address": { 00:11:55.710 "trtype": "TCP", 00:11:55.710 "adrfam": "IPv4", 00:11:55.710 "traddr": "10.0.0.1", 00:11:55.710 "trsvcid": "52796" 00:11:55.710 }, 00:11:55.710 "auth": { 00:11:55.710 "state": "completed", 00:11:55.710 "digest": "sha384", 00:11:55.710 "dhgroup": "ffdhe3072" 00:11:55.710 } 00:11:55.710 } 00:11:55.710 ]' 00:11:55.710 00:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:55.710 00:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:55.710 00:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:55.981 00:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:55.981 00:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:55.981 00:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.981 00:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.981 00:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:56.239 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzhmNWI4OGVkOTNjNTdlZjJkZGFkNDRlMjA0Yjg1ODZnNz/h: --dhchap-ctrl-secret DHHC-1:02:NzM4YzZmNDYzZDRlZmQ4ZDUxNjFjMzFmYzBlZTM3MmJmNjVkODkzMzdiNjkwNjU06UaWzA==: 00:11:56.239 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:01:NzhmNWI4OGVkOTNjNTdlZjJkZGFkNDRlMjA0Yjg1ODZnNz/h: --dhchap-ctrl-secret DHHC-1:02:NzM4YzZmNDYzZDRlZmQ4ZDUxNjFjMzFmYzBlZTM3MmJmNjVkODkzMzdiNjkwNjU06UaWzA==: 00:11:56.805 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.805 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:11:56.805 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.805 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.806 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.806 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:56.806 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:56.806 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:57.064 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:11:57.064 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:57.064 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:57.064 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:57.064 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:57.064 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:57.064 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:57.064 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.064 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.064 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.064 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:57.064 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:57.064 00:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:57.321 00:11:57.579 00:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:57.579 00:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:57.579 00:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.837 00:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.837 00:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.837 00:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.837 00:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.837 00:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.837 00:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:57.837 { 00:11:57.837 "cntlid": 69, 00:11:57.837 "qid": 0, 00:11:57.837 "state": "enabled", 00:11:57.837 "thread": "nvmf_tgt_poll_group_000", 00:11:57.837 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:11:57.837 "listen_address": { 00:11:57.837 "trtype": "TCP", 00:11:57.837 "adrfam": "IPv4", 00:11:57.837 "traddr": "10.0.0.3", 00:11:57.837 "trsvcid": "4420" 00:11:57.837 }, 00:11:57.837 "peer_address": { 00:11:57.837 "trtype": "TCP", 00:11:57.837 "adrfam": "IPv4", 00:11:57.837 "traddr": "10.0.0.1", 00:11:57.837 "trsvcid": "52822" 00:11:57.837 }, 00:11:57.837 "auth": { 00:11:57.837 "state": "completed", 00:11:57.837 "digest": "sha384", 00:11:57.837 "dhgroup": "ffdhe3072" 00:11:57.837 } 00:11:57.837 } 00:11:57.837 ]' 00:11:57.837 00:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:57.837 00:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:57.837 00:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:57.838 00:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:57.838 00:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:57.838 00:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.838 00:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:11:57.838 00:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:58.096 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NDhmMTQ4ZTA0ZmRlZjUwYzJlMDc0OWU5ZjcwZGUufE1I: 00:11:58.096 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NDhmMTQ4ZTA0ZmRlZjUwYzJlMDc0OWU5ZjcwZGUufE1I: 00:11:59.030 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:59.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:59.030 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:11:59.030 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.030 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.030 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.030 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:59.030 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:59.030 00:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:59.292 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:11:59.293 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:59.293 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:59.293 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:59.293 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:59.293 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:59.293 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key3 00:11:59.293 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.293 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.293 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.293 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:59.293 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:59.293 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:59.553 00:11:59.553 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:59.553 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:59.553 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.811 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.811 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.811 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.811 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.811 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.811 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:59.811 { 00:11:59.811 "cntlid": 71, 00:11:59.811 "qid": 0, 00:11:59.811 "state": "enabled", 00:11:59.811 "thread": "nvmf_tgt_poll_group_000", 00:11:59.811 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:11:59.811 "listen_address": { 00:11:59.811 "trtype": "TCP", 00:11:59.811 "adrfam": "IPv4", 00:11:59.811 "traddr": "10.0.0.3", 00:11:59.811 "trsvcid": "4420" 00:11:59.811 }, 00:11:59.811 "peer_address": { 00:11:59.811 "trtype": "TCP", 00:11:59.811 "adrfam": "IPv4", 00:11:59.811 "traddr": "10.0.0.1", 00:11:59.811 "trsvcid": "52854" 00:11:59.811 }, 00:11:59.811 "auth": { 00:11:59.811 "state": "completed", 00:11:59.811 "digest": "sha384", 00:11:59.811 "dhgroup": "ffdhe3072" 00:11:59.811 } 00:11:59.811 } 00:11:59.811 ]' 00:11:59.811 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:00.070 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:00.070 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:00.070 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:00.070 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:00.070 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:00.070 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:00.070 00:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:00.328 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzY1ZDg4ZmQzZDRjNGJlMWM2Mzc1MWRiMzU1MjFjMGUyMmZkMGNjNDNkZGEyZGU1NTIwOWMxY2JjY2EyYmJjZLEN8fE=: 00:12:00.328 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:03:YzY1ZDg4ZmQzZDRjNGJlMWM2Mzc1MWRiMzU1MjFjMGUyMmZkMGNjNDNkZGEyZGU1NTIwOWMxY2JjY2EyYmJjZLEN8fE=: 00:12:00.895 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.895 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:12:00.895 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.895 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.895 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.895 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:00.895 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:00.895 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:00.895 00:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:01.154 00:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:12:01.154 00:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:01.154 00:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:01.154 00:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:01.154 00:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:01.154 00:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:01.154 00:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:01.154 00:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.154 00:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.154 00:27:47 
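[Editor's note] Besides the SPDK host stack, each pass also authenticates the kernel NVMe initiator against the same subsystem, as in the nvme connect/disconnect just above, and then deregisters the host so the next key id starts clean. A hedged sketch of that leg; flags, address and hostid are copied from the log, while SUBNQN/HOSTNQN/HOST_SECRET/CTRL_SECRET are placeholder names and the secret values are elided.

    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Placeholders; the real values are DHHC-1 strings like the ones in the log.
    HOST_SECRET='DHHC-1:...'
    CTRL_SECRET='DHHC-1:...'

    # --dhchap-ctrl-secret is passed only when the key id has a controller key;
    # the key3 passes in the log omit it.
    nvme connect -t tcp -a 10.0.0.3 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
        --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 \
        --dhchap-secret "$HOST_SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"

    # Drop the connection and deregister the host before the next key id.
    nvme disconnect -n "$SUBNQN"
    "$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"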
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.154 00:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:01.154 00:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:01.154 00:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:01.722 00:12:01.722 00:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:01.722 00:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:01.722 00:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.980 00:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.980 00:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.980 00:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.980 00:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.980 00:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.980 00:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:01.980 { 00:12:01.980 "cntlid": 73, 00:12:01.980 "qid": 0, 00:12:01.980 "state": "enabled", 00:12:01.980 "thread": "nvmf_tgt_poll_group_000", 00:12:01.980 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:12:01.980 "listen_address": { 00:12:01.980 "trtype": "TCP", 00:12:01.980 "adrfam": "IPv4", 00:12:01.980 "traddr": "10.0.0.3", 00:12:01.980 "trsvcid": "4420" 00:12:01.980 }, 00:12:01.980 "peer_address": { 00:12:01.980 "trtype": "TCP", 00:12:01.980 "adrfam": "IPv4", 00:12:01.980 "traddr": "10.0.0.1", 00:12:01.980 "trsvcid": "52872" 00:12:01.980 }, 00:12:01.980 "auth": { 00:12:01.980 "state": "completed", 00:12:01.980 "digest": "sha384", 00:12:01.980 "dhgroup": "ffdhe4096" 00:12:01.980 } 00:12:01.980 } 00:12:01.980 ]' 00:12:01.980 00:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:01.980 00:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:01.980 00:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:01.980 00:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:01.980 00:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:01.980 00:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:01.980 00:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:01.981 00:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:02.548 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTFiOTU5NDBhMTcxNTA0NDIxZTBmNTFiN2U3ZTRjYjlkYWVjNTQ1NWM0YjJiM2Y5xxMAtA==: --dhchap-ctrl-secret DHHC-1:03:ZjBmMTk2NjEwZWVjODlkZjgxMjAxNjc5NTM5ZTcyZjM4ZDllNjA0MTY1MTdiMjNkMzc4MmRmZGUxYWY3ZTYxMlTis34=: 00:12:02.548 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:00:NTFiOTU5NDBhMTcxNTA0NDIxZTBmNTFiN2U3ZTRjYjlkYWVjNTQ1NWM0YjJiM2Y5xxMAtA==: --dhchap-ctrl-secret DHHC-1:03:ZjBmMTk2NjEwZWVjODlkZjgxMjAxNjc5NTM5ZTcyZjM4ZDllNjA0MTY1MTdiMjNkMzc4MmRmZGUxYWY3ZTYxMlTis34=: 00:12:03.115 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.115 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:12:03.115 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.115 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.115 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.115 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:03.115 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:03.115 00:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:03.374 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:12:03.374 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:03.374 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:03.374 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:03.374 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:03.374 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.374 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:03.374 00:27:49 
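[Editor's note] By this point the same add_host / attach / verify / nvme-connect cycle has run for ffdhe2048 and ffdhe3072 and is repeating for ffdhe4096, once per key id, with the digest held at sha384. A hedged reconstruction of the driving loop follows; the dhgroup list and the 0..3 key-id range are inferred from the iterations visible in the log (target/auth.sh lines 119-123), and the per-key body is left as a comment since it is sketched in the earlier notes.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096; do     # outer loop, auth.sh@119
        for keyid in 0 1 2 3; do                          # one pass per key, auth.sh@120
            # Pin the host to a single digest/dhgroup so the handshake can only
            # succeed with exactly these parameters (auth.sh@121).
            "$RPC" -s /var/tmp/host.sock bdev_nvme_set_options \
                --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
            # ... followed by the add_host / attach / verify / nvme-connect cycle
            # for key$keyid (auth.sh@123 connect_authenticate), as sketched above.
        done
    done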
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.374 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.374 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.374 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:03.374 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:03.374 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:03.632 00:12:03.890 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:03.890 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:03.890 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.148 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.148 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.148 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.148 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.148 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.148 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:04.148 { 00:12:04.148 "cntlid": 75, 00:12:04.148 "qid": 0, 00:12:04.148 "state": "enabled", 00:12:04.148 "thread": "nvmf_tgt_poll_group_000", 00:12:04.148 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:12:04.148 "listen_address": { 00:12:04.148 "trtype": "TCP", 00:12:04.148 "adrfam": "IPv4", 00:12:04.148 "traddr": "10.0.0.3", 00:12:04.148 "trsvcid": "4420" 00:12:04.148 }, 00:12:04.148 "peer_address": { 00:12:04.148 "trtype": "TCP", 00:12:04.148 "adrfam": "IPv4", 00:12:04.148 "traddr": "10.0.0.1", 00:12:04.148 "trsvcid": "42520" 00:12:04.148 }, 00:12:04.148 "auth": { 00:12:04.148 "state": "completed", 00:12:04.148 "digest": "sha384", 00:12:04.148 "dhgroup": "ffdhe4096" 00:12:04.148 } 00:12:04.148 } 00:12:04.148 ]' 00:12:04.148 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:04.148 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:04.148 00:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:04.148 00:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:12:04.148 00:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:04.148 00:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:04.148 00:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:04.149 00:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.407 00:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzhmNWI4OGVkOTNjNTdlZjJkZGFkNDRlMjA0Yjg1ODZnNz/h: --dhchap-ctrl-secret DHHC-1:02:NzM4YzZmNDYzZDRlZmQ4ZDUxNjFjMzFmYzBlZTM3MmJmNjVkODkzMzdiNjkwNjU06UaWzA==: 00:12:04.407 00:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:01:NzhmNWI4OGVkOTNjNTdlZjJkZGFkNDRlMjA0Yjg1ODZnNz/h: --dhchap-ctrl-secret DHHC-1:02:NzM4YzZmNDYzZDRlZmQ4ZDUxNjFjMzFmYzBlZTM3MmJmNjVkODkzMzdiNjkwNjU06UaWzA==: 00:12:04.974 00:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:04.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:04.974 00:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:12:04.974 00:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.974 00:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.974 00:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.974 00:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:04.974 00:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:04.974 00:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:05.540 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:12:05.540 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:05.541 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:05.541 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:05.541 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:05.541 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:05.541 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.541 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.541 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.541 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.541 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.541 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.541 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.801 00:12:05.801 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:05.801 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:05.801 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:06.062 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:06.062 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.062 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.062 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.062 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.062 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:06.062 { 00:12:06.062 "cntlid": 77, 00:12:06.062 "qid": 0, 00:12:06.062 "state": "enabled", 00:12:06.062 "thread": "nvmf_tgt_poll_group_000", 00:12:06.062 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:12:06.062 "listen_address": { 00:12:06.062 "trtype": "TCP", 00:12:06.062 "adrfam": "IPv4", 00:12:06.062 "traddr": "10.0.0.3", 00:12:06.062 "trsvcid": "4420" 00:12:06.062 }, 00:12:06.062 "peer_address": { 00:12:06.062 "trtype": "TCP", 00:12:06.062 "adrfam": "IPv4", 00:12:06.062 "traddr": "10.0.0.1", 00:12:06.062 "trsvcid": "42546" 00:12:06.062 }, 00:12:06.062 "auth": { 00:12:06.062 "state": "completed", 00:12:06.062 "digest": "sha384", 00:12:06.062 "dhgroup": "ffdhe4096" 00:12:06.062 } 00:12:06.062 } 00:12:06.062 ]' 00:12:06.062 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:06.062 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:06.062 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:12:06.062 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:06.062 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:06.062 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:06.062 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:06.062 00:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:06.321 00:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NDhmMTQ4ZTA0ZmRlZjUwYzJlMDc0OWU5ZjcwZGUufE1I: 00:12:06.321 00:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NDhmMTQ4ZTA0ZmRlZjUwYzJlMDc0OWU5ZjcwZGUufE1I: 00:12:06.888 00:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:06.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:06.888 00:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:12:06.888 00:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.888 00:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.888 00:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.888 00:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:06.888 00:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:06.888 00:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:07.146 00:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:12:07.146 00:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:07.146 00:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:07.146 00:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:07.146 00:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:07.146 00:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.146 00:27:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key3 00:12:07.146 00:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.146 00:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.146 00:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.146 00:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:07.147 00:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:07.147 00:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:07.714 00:12:07.714 00:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:07.714 00:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:07.714 00:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:07.973 00:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:07.973 00:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:07.973 00:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.973 00:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.973 00:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.973 00:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:07.973 { 00:12:07.973 "cntlid": 79, 00:12:07.973 "qid": 0, 00:12:07.973 "state": "enabled", 00:12:07.973 "thread": "nvmf_tgt_poll_group_000", 00:12:07.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:12:07.973 "listen_address": { 00:12:07.973 "trtype": "TCP", 00:12:07.973 "adrfam": "IPv4", 00:12:07.973 "traddr": "10.0.0.3", 00:12:07.973 "trsvcid": "4420" 00:12:07.973 }, 00:12:07.973 "peer_address": { 00:12:07.973 "trtype": "TCP", 00:12:07.973 "adrfam": "IPv4", 00:12:07.973 "traddr": "10.0.0.1", 00:12:07.973 "trsvcid": "42574" 00:12:07.973 }, 00:12:07.973 "auth": { 00:12:07.973 "state": "completed", 00:12:07.973 "digest": "sha384", 00:12:07.973 "dhgroup": "ffdhe4096" 00:12:07.973 } 00:12:07.973 } 00:12:07.973 ]' 00:12:07.973 00:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:07.973 00:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:07.973 00:27:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:07.974 00:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:07.974 00:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:08.232 00:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.232 00:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.232 00:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.491 00:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzY1ZDg4ZmQzZDRjNGJlMWM2Mzc1MWRiMzU1MjFjMGUyMmZkMGNjNDNkZGEyZGU1NTIwOWMxY2JjY2EyYmJjZLEN8fE=: 00:12:08.491 00:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:03:YzY1ZDg4ZmQzZDRjNGJlMWM2Mzc1MWRiMzU1MjFjMGUyMmZkMGNjNDNkZGEyZGU1NTIwOWMxY2JjY2EyYmJjZLEN8fE=: 00:12:09.058 00:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.058 00:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:12:09.058 00:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.058 00:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.058 00:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.058 00:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:09.058 00:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:09.058 00:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:09.058 00:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:09.316 00:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:12:09.316 00:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:09.316 00:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:09.316 00:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:09.316 00:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:09.316 00:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.316 00:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.316 00:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.316 00:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.316 00:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.316 00:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.316 00:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.316 00:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.573 00:12:09.831 00:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:09.832 00:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:09.832 00:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:10.090 00:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.090 00:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.090 00:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.090 00:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.090 00:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.090 00:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:10.090 { 00:12:10.090 "cntlid": 81, 00:12:10.090 "qid": 0, 00:12:10.090 "state": "enabled", 00:12:10.090 "thread": "nvmf_tgt_poll_group_000", 00:12:10.090 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:12:10.090 "listen_address": { 00:12:10.090 "trtype": "TCP", 00:12:10.090 "adrfam": "IPv4", 00:12:10.090 "traddr": "10.0.0.3", 00:12:10.090 "trsvcid": "4420" 00:12:10.090 }, 00:12:10.090 "peer_address": { 00:12:10.090 "trtype": "TCP", 00:12:10.090 "adrfam": "IPv4", 00:12:10.090 "traddr": "10.0.0.1", 00:12:10.090 "trsvcid": "42606" 00:12:10.090 }, 00:12:10.090 "auth": { 00:12:10.090 "state": "completed", 00:12:10.090 "digest": "sha384", 00:12:10.090 "dhgroup": "ffdhe6144" 00:12:10.090 } 00:12:10.090 } 00:12:10.090 ]' 00:12:10.090 00:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
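(Editor's note, not part of the captured trace.) The xtrace above repeats one and the same sequence for every digest/dhgroup/key combination, so the remaining iterations are easier to follow with the pattern pulled out. The sketch below is reconstructed only from commands visible in this run: the rpc.py path, host socket, addresses and NQNs are the ones this job happens to use, and the shell variables are introduced here purely for readability (they are not part of auth.sh as traced).

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Host side (bdev_nvme initiator, RPC socket /var/tmp/host.sock): restrict
  # DH-HMAC-CHAP to the digest/dhgroup pair under test.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

  # Target side (default RPC socket): register the host with the key pair being exercised.
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Host side: attach a controller with the same keys, then ask the target what
  # the resulting queue pair actually negotiated.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
  qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # Tear down before the next combination. The trace then re-checks the same path
  # with nvme-cli (nvme connect ... --dhchap-secret DHHC-1:xx:... --dhchap-ctrl-secret ...),
  # disconnects, and removes the host entry with nvmf_subsystem_remove_host.
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The qid-0 entry returned by nvmf_subsystem_get_qpairs is the admin queue pair, which is why the checks above only inspect element [0] of the JSON array, exactly as the jq filters in the trace do.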
00:12:10.090 00:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:10.090 00:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:10.090 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:10.090 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:10.090 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.090 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.090 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.657 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTFiOTU5NDBhMTcxNTA0NDIxZTBmNTFiN2U3ZTRjYjlkYWVjNTQ1NWM0YjJiM2Y5xxMAtA==: --dhchap-ctrl-secret DHHC-1:03:ZjBmMTk2NjEwZWVjODlkZjgxMjAxNjc5NTM5ZTcyZjM4ZDllNjA0MTY1MTdiMjNkMzc4MmRmZGUxYWY3ZTYxMlTis34=: 00:12:10.657 00:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:00:NTFiOTU5NDBhMTcxNTA0NDIxZTBmNTFiN2U3ZTRjYjlkYWVjNTQ1NWM0YjJiM2Y5xxMAtA==: --dhchap-ctrl-secret DHHC-1:03:ZjBmMTk2NjEwZWVjODlkZjgxMjAxNjc5NTM5ZTcyZjM4ZDllNjA0MTY1MTdiMjNkMzc4MmRmZGUxYWY3ZTYxMlTis34=: 00:12:11.224 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.224 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:12:11.224 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.224 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.224 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.224 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:11.224 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:11.224 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:11.483 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:12:11.483 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:11.483 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:11.483 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:12:11.483 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:11.483 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.483 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.483 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.483 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.483 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.483 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.483 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.483 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:12.050 00:12:12.050 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:12.050 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:12.050 00:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.310 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.310 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.310 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.310 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.310 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.310 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:12.310 { 00:12:12.310 "cntlid": 83, 00:12:12.310 "qid": 0, 00:12:12.310 "state": "enabled", 00:12:12.310 "thread": "nvmf_tgt_poll_group_000", 00:12:12.310 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:12:12.310 "listen_address": { 00:12:12.310 "trtype": "TCP", 00:12:12.310 "adrfam": "IPv4", 00:12:12.310 "traddr": "10.0.0.3", 00:12:12.310 "trsvcid": "4420" 00:12:12.310 }, 00:12:12.310 "peer_address": { 00:12:12.310 "trtype": "TCP", 00:12:12.310 "adrfam": "IPv4", 00:12:12.310 "traddr": "10.0.0.1", 00:12:12.310 "trsvcid": "42650" 00:12:12.310 }, 00:12:12.310 "auth": { 00:12:12.310 "state": "completed", 00:12:12.310 "digest": "sha384", 
00:12:12.310 "dhgroup": "ffdhe6144" 00:12:12.310 } 00:12:12.310 } 00:12:12.310 ]' 00:12:12.310 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:12.310 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:12.310 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:12.310 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:12.310 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:12.310 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.310 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.310 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.569 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzhmNWI4OGVkOTNjNTdlZjJkZGFkNDRlMjA0Yjg1ODZnNz/h: --dhchap-ctrl-secret DHHC-1:02:NzM4YzZmNDYzZDRlZmQ4ZDUxNjFjMzFmYzBlZTM3MmJmNjVkODkzMzdiNjkwNjU06UaWzA==: 00:12:12.569 00:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:01:NzhmNWI4OGVkOTNjNTdlZjJkZGFkNDRlMjA0Yjg1ODZnNz/h: --dhchap-ctrl-secret DHHC-1:02:NzM4YzZmNDYzZDRlZmQ4ZDUxNjFjMzFmYzBlZTM3MmJmNjVkODkzMzdiNjkwNjU06UaWzA==: 00:12:13.504 00:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.504 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.504 00:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:12:13.504 00:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.504 00:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.504 00:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.504 00:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:13.504 00:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:13.504 00:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:13.504 00:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:12:13.504 00:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:13.504 00:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:12:13.504 00:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:13.504 00:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:13.504 00:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.504 00:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.504 00:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.504 00:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.504 00:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.504 00:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.504 00:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.504 00:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:14.071 00:12:14.071 00:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:14.071 00:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:14.071 00:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.330 00:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.330 00:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.330 00:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.330 00:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.330 00:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.330 00:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:14.330 { 00:12:14.330 "cntlid": 85, 00:12:14.330 "qid": 0, 00:12:14.330 "state": "enabled", 00:12:14.330 "thread": "nvmf_tgt_poll_group_000", 00:12:14.330 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:12:14.330 "listen_address": { 00:12:14.330 "trtype": "TCP", 00:12:14.330 "adrfam": "IPv4", 00:12:14.330 "traddr": "10.0.0.3", 00:12:14.330 "trsvcid": "4420" 00:12:14.330 }, 00:12:14.330 "peer_address": { 00:12:14.330 "trtype": "TCP", 00:12:14.330 "adrfam": "IPv4", 00:12:14.330 "traddr": "10.0.0.1", 00:12:14.330 "trsvcid": "50356" 
00:12:14.330 }, 00:12:14.330 "auth": { 00:12:14.330 "state": "completed", 00:12:14.330 "digest": "sha384", 00:12:14.330 "dhgroup": "ffdhe6144" 00:12:14.330 } 00:12:14.330 } 00:12:14.330 ]' 00:12:14.330 00:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:14.330 00:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:14.330 00:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:14.330 00:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:14.330 00:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:14.589 00:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.589 00:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.589 00:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.847 00:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NDhmMTQ4ZTA0ZmRlZjUwYzJlMDc0OWU5ZjcwZGUufE1I: 00:12:14.847 00:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NDhmMTQ4ZTA0ZmRlZjUwYzJlMDc0OWU5ZjcwZGUufE1I: 00:12:15.413 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.413 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:12:15.413 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.413 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.413 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.413 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:15.413 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:15.413 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:15.671 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:12:15.671 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:12:15.671 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:15.671 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:15.671 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:15.671 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.671 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key3 00:12:15.671 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.671 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.671 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.671 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:15.671 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:15.671 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:16.237 00:12:16.237 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:16.237 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.237 00:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:16.237 00:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.237 00:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.237 00:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.237 00:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.512 00:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.512 00:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:16.512 { 00:12:16.512 "cntlid": 87, 00:12:16.512 "qid": 0, 00:12:16.512 "state": "enabled", 00:12:16.512 "thread": "nvmf_tgt_poll_group_000", 00:12:16.512 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:12:16.512 "listen_address": { 00:12:16.512 "trtype": "TCP", 00:12:16.512 "adrfam": "IPv4", 00:12:16.512 "traddr": "10.0.0.3", 00:12:16.512 "trsvcid": "4420" 00:12:16.512 }, 00:12:16.512 "peer_address": { 00:12:16.512 "trtype": "TCP", 00:12:16.512 "adrfam": "IPv4", 00:12:16.512 "traddr": "10.0.0.1", 00:12:16.512 "trsvcid": 
"50390" 00:12:16.512 }, 00:12:16.512 "auth": { 00:12:16.512 "state": "completed", 00:12:16.512 "digest": "sha384", 00:12:16.512 "dhgroup": "ffdhe6144" 00:12:16.512 } 00:12:16.512 } 00:12:16.512 ]' 00:12:16.512 00:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:16.512 00:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:16.512 00:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:16.512 00:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:16.512 00:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:16.512 00:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.512 00:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.512 00:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.783 00:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzY1ZDg4ZmQzZDRjNGJlMWM2Mzc1MWRiMzU1MjFjMGUyMmZkMGNjNDNkZGEyZGU1NTIwOWMxY2JjY2EyYmJjZLEN8fE=: 00:12:16.783 00:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:03:YzY1ZDg4ZmQzZDRjNGJlMWM2Mzc1MWRiMzU1MjFjMGUyMmZkMGNjNDNkZGEyZGU1NTIwOWMxY2JjY2EyYmJjZLEN8fE=: 00:12:17.351 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.351 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:12:17.351 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.351 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.351 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.351 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:17.351 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:17.351 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:17.351 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:17.611 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:12:17.611 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:12:17.611 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:17.611 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:17.611 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:17.611 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.611 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.611 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.611 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.611 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.611 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.611 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.611 00:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:18.178 00:12:18.178 00:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:18.178 00:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:18.178 00:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.437 00:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.438 00:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:18.438 00:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.438 00:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.438 00:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.438 00:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:18.438 { 00:12:18.438 "cntlid": 89, 00:12:18.438 "qid": 0, 00:12:18.438 "state": "enabled", 00:12:18.438 "thread": "nvmf_tgt_poll_group_000", 00:12:18.438 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:12:18.438 "listen_address": { 00:12:18.438 "trtype": "TCP", 00:12:18.438 "adrfam": "IPv4", 00:12:18.438 "traddr": "10.0.0.3", 00:12:18.438 "trsvcid": "4420" 00:12:18.438 }, 00:12:18.438 "peer_address": { 00:12:18.438 
"trtype": "TCP", 00:12:18.438 "adrfam": "IPv4", 00:12:18.438 "traddr": "10.0.0.1", 00:12:18.438 "trsvcid": "50410" 00:12:18.438 }, 00:12:18.438 "auth": { 00:12:18.438 "state": "completed", 00:12:18.438 "digest": "sha384", 00:12:18.438 "dhgroup": "ffdhe8192" 00:12:18.438 } 00:12:18.438 } 00:12:18.438 ]' 00:12:18.438 00:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:18.438 00:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:18.438 00:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:18.438 00:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:18.438 00:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:18.697 00:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.697 00:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.697 00:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:18.955 00:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTFiOTU5NDBhMTcxNTA0NDIxZTBmNTFiN2U3ZTRjYjlkYWVjNTQ1NWM0YjJiM2Y5xxMAtA==: --dhchap-ctrl-secret DHHC-1:03:ZjBmMTk2NjEwZWVjODlkZjgxMjAxNjc5NTM5ZTcyZjM4ZDllNjA0MTY1MTdiMjNkMzc4MmRmZGUxYWY3ZTYxMlTis34=: 00:12:18.955 00:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:00:NTFiOTU5NDBhMTcxNTA0NDIxZTBmNTFiN2U3ZTRjYjlkYWVjNTQ1NWM0YjJiM2Y5xxMAtA==: --dhchap-ctrl-secret DHHC-1:03:ZjBmMTk2NjEwZWVjODlkZjgxMjAxNjc5NTM5ZTcyZjM4ZDllNjA0MTY1MTdiMjNkMzc4MmRmZGUxYWY3ZTYxMlTis34=: 00:12:19.529 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:19.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:19.529 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:12:19.529 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.529 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.529 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.529 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:19.529 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:19.529 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:19.787 00:28:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:12:19.787 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:19.787 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:19.787 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:19.787 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:19.787 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:19.787 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.787 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.787 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.787 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.787 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.787 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.787 00:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:20.354 00:12:20.354 00:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:20.354 00:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:20.354 00:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.613 00:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.613 00:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.613 00:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.613 00:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.613 00:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.613 00:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:20.613 { 00:12:20.613 "cntlid": 91, 00:12:20.613 "qid": 0, 00:12:20.613 "state": "enabled", 00:12:20.613 "thread": "nvmf_tgt_poll_group_000", 00:12:20.613 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 
00:12:20.613 "listen_address": { 00:12:20.613 "trtype": "TCP", 00:12:20.613 "adrfam": "IPv4", 00:12:20.613 "traddr": "10.0.0.3", 00:12:20.613 "trsvcid": "4420" 00:12:20.613 }, 00:12:20.613 "peer_address": { 00:12:20.613 "trtype": "TCP", 00:12:20.613 "adrfam": "IPv4", 00:12:20.613 "traddr": "10.0.0.1", 00:12:20.613 "trsvcid": "50454" 00:12:20.613 }, 00:12:20.613 "auth": { 00:12:20.613 "state": "completed", 00:12:20.613 "digest": "sha384", 00:12:20.613 "dhgroup": "ffdhe8192" 00:12:20.613 } 00:12:20.613 } 00:12:20.613 ]' 00:12:20.613 00:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:20.613 00:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:20.613 00:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:20.613 00:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:20.613 00:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:20.613 00:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:20.613 00:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:20.613 00:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:21.180 00:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzhmNWI4OGVkOTNjNTdlZjJkZGFkNDRlMjA0Yjg1ODZnNz/h: --dhchap-ctrl-secret DHHC-1:02:NzM4YzZmNDYzZDRlZmQ4ZDUxNjFjMzFmYzBlZTM3MmJmNjVkODkzMzdiNjkwNjU06UaWzA==: 00:12:21.180 00:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:01:NzhmNWI4OGVkOTNjNTdlZjJkZGFkNDRlMjA0Yjg1ODZnNz/h: --dhchap-ctrl-secret DHHC-1:02:NzM4YzZmNDYzZDRlZmQ4ZDUxNjFjMzFmYzBlZTM3MmJmNjVkODkzMzdiNjkwNjU06UaWzA==: 00:12:21.748 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:21.748 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:21.748 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:12:21.748 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.748 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.748 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.748 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:21.748 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:21.748 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:22.006 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:12:22.006 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:22.006 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:22.006 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:22.006 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:22.006 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:22.006 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:22.006 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.006 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.007 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.007 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:22.007 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:22.007 00:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:22.574 00:12:22.574 00:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:22.574 00:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:22.574 00:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:23.143 00:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.143 00:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.143 00:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.143 00:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.143 00:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.143 00:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:23.143 { 00:12:23.144 "cntlid": 93, 00:12:23.144 "qid": 0, 00:12:23.144 "state": "enabled", 00:12:23.144 "thread": 
"nvmf_tgt_poll_group_000", 00:12:23.144 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:12:23.144 "listen_address": { 00:12:23.144 "trtype": "TCP", 00:12:23.144 "adrfam": "IPv4", 00:12:23.144 "traddr": "10.0.0.3", 00:12:23.144 "trsvcid": "4420" 00:12:23.144 }, 00:12:23.144 "peer_address": { 00:12:23.144 "trtype": "TCP", 00:12:23.144 "adrfam": "IPv4", 00:12:23.144 "traddr": "10.0.0.1", 00:12:23.144 "trsvcid": "35864" 00:12:23.144 }, 00:12:23.144 "auth": { 00:12:23.144 "state": "completed", 00:12:23.144 "digest": "sha384", 00:12:23.144 "dhgroup": "ffdhe8192" 00:12:23.144 } 00:12:23.144 } 00:12:23.144 ]' 00:12:23.144 00:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:23.144 00:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:23.144 00:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:23.144 00:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:23.144 00:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:23.144 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:23.144 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.144 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:23.403 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NDhmMTQ4ZTA0ZmRlZjUwYzJlMDc0OWU5ZjcwZGUufE1I: 00:12:23.403 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NDhmMTQ4ZTA0ZmRlZjUwYzJlMDc0OWU5ZjcwZGUufE1I: 00:12:23.970 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.229 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:12:24.229 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.229 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.229 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.229 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:24.229 00:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:24.229 00:28:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:24.487 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:12:24.487 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:24.487 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:24.487 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:24.487 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:24.487 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:24.487 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key3 00:12:24.487 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.487 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.487 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.487 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:24.487 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:24.487 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:25.053 00:12:25.053 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:25.053 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:25.053 00:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:25.311 00:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:25.311 00:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:25.312 00:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.312 00:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.312 00:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.312 00:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:25.312 { 00:12:25.312 "cntlid": 95, 00:12:25.312 "qid": 0, 00:12:25.312 "state": "enabled", 00:12:25.312 
"thread": "nvmf_tgt_poll_group_000", 00:12:25.312 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:12:25.312 "listen_address": { 00:12:25.312 "trtype": "TCP", 00:12:25.312 "adrfam": "IPv4", 00:12:25.312 "traddr": "10.0.0.3", 00:12:25.312 "trsvcid": "4420" 00:12:25.312 }, 00:12:25.312 "peer_address": { 00:12:25.312 "trtype": "TCP", 00:12:25.312 "adrfam": "IPv4", 00:12:25.312 "traddr": "10.0.0.1", 00:12:25.312 "trsvcid": "35884" 00:12:25.312 }, 00:12:25.312 "auth": { 00:12:25.312 "state": "completed", 00:12:25.312 "digest": "sha384", 00:12:25.312 "dhgroup": "ffdhe8192" 00:12:25.312 } 00:12:25.312 } 00:12:25.312 ]' 00:12:25.312 00:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:25.312 00:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:25.312 00:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:25.570 00:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:25.570 00:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:25.570 00:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:25.570 00:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:25.570 00:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:25.829 00:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzY1ZDg4ZmQzZDRjNGJlMWM2Mzc1MWRiMzU1MjFjMGUyMmZkMGNjNDNkZGEyZGU1NTIwOWMxY2JjY2EyYmJjZLEN8fE=: 00:12:25.829 00:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:03:YzY1ZDg4ZmQzZDRjNGJlMWM2Mzc1MWRiMzU1MjFjMGUyMmZkMGNjNDNkZGEyZGU1NTIwOWMxY2JjY2EyYmJjZLEN8fE=: 00:12:26.396 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:26.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.396 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:12:26.396 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.396 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.396 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.396 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:26.396 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:26.396 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:26.396 00:28:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:26.396 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:26.963 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:12:26.963 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:26.963 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:26.963 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:26.963 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:26.963 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:26.963 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:26.963 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.963 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.963 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.963 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:26.963 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:26.963 00:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:27.239 00:12:27.239 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:27.239 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:27.239 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:27.510 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:27.510 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:27.510 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.510 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.510 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.510 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:27.510 { 00:12:27.510 "cntlid": 97, 00:12:27.510 "qid": 0, 00:12:27.510 "state": "enabled", 00:12:27.510 "thread": "nvmf_tgt_poll_group_000", 00:12:27.510 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:12:27.510 "listen_address": { 00:12:27.510 "trtype": "TCP", 00:12:27.510 "adrfam": "IPv4", 00:12:27.510 "traddr": "10.0.0.3", 00:12:27.510 "trsvcid": "4420" 00:12:27.510 }, 00:12:27.510 "peer_address": { 00:12:27.510 "trtype": "TCP", 00:12:27.510 "adrfam": "IPv4", 00:12:27.510 "traddr": "10.0.0.1", 00:12:27.510 "trsvcid": "35908" 00:12:27.510 }, 00:12:27.510 "auth": { 00:12:27.510 "state": "completed", 00:12:27.510 "digest": "sha512", 00:12:27.510 "dhgroup": "null" 00:12:27.510 } 00:12:27.510 } 00:12:27.510 ]' 00:12:27.510 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:27.510 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:27.510 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:27.510 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:27.510 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:27.769 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:27.769 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:27.769 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:28.027 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTFiOTU5NDBhMTcxNTA0NDIxZTBmNTFiN2U3ZTRjYjlkYWVjNTQ1NWM0YjJiM2Y5xxMAtA==: --dhchap-ctrl-secret DHHC-1:03:ZjBmMTk2NjEwZWVjODlkZjgxMjAxNjc5NTM5ZTcyZjM4ZDllNjA0MTY1MTdiMjNkMzc4MmRmZGUxYWY3ZTYxMlTis34=: 00:12:28.027 00:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:00:NTFiOTU5NDBhMTcxNTA0NDIxZTBmNTFiN2U3ZTRjYjlkYWVjNTQ1NWM0YjJiM2Y5xxMAtA==: --dhchap-ctrl-secret DHHC-1:03:ZjBmMTk2NjEwZWVjODlkZjgxMjAxNjc5NTM5ZTcyZjM4ZDllNjA0MTY1MTdiMjNkMzc4MmRmZGUxYWY3ZTYxMlTis34=: 00:12:28.595 00:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:28.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:28.595 00:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:12:28.595 00:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.595 00:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.595 00:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:12:28.595 00:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:28.595 00:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:28.595 00:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:28.854 00:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:12:28.854 00:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:28.854 00:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:28.854 00:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:28.854 00:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:28.854 00:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.854 00:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:28.854 00:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.854 00:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.113 00:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.113 00:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.113 00:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.113 00:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.372 00:12:29.372 00:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:29.372 00:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:29.372 00:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:29.631 00:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:29.631 00:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:29.631 00:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.631 00:28:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.631 00:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.631 00:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:29.631 { 00:12:29.631 "cntlid": 99, 00:12:29.631 "qid": 0, 00:12:29.631 "state": "enabled", 00:12:29.631 "thread": "nvmf_tgt_poll_group_000", 00:12:29.631 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:12:29.631 "listen_address": { 00:12:29.631 "trtype": "TCP", 00:12:29.631 "adrfam": "IPv4", 00:12:29.631 "traddr": "10.0.0.3", 00:12:29.631 "trsvcid": "4420" 00:12:29.631 }, 00:12:29.631 "peer_address": { 00:12:29.631 "trtype": "TCP", 00:12:29.631 "adrfam": "IPv4", 00:12:29.631 "traddr": "10.0.0.1", 00:12:29.631 "trsvcid": "35940" 00:12:29.631 }, 00:12:29.631 "auth": { 00:12:29.631 "state": "completed", 00:12:29.631 "digest": "sha512", 00:12:29.631 "dhgroup": "null" 00:12:29.631 } 00:12:29.631 } 00:12:29.631 ]' 00:12:29.631 00:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:29.631 00:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:29.631 00:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:29.631 00:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:29.631 00:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:29.890 00:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:29.890 00:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:29.890 00:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:30.148 00:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzhmNWI4OGVkOTNjNTdlZjJkZGFkNDRlMjA0Yjg1ODZnNz/h: --dhchap-ctrl-secret DHHC-1:02:NzM4YzZmNDYzZDRlZmQ4ZDUxNjFjMzFmYzBlZTM3MmJmNjVkODkzMzdiNjkwNjU06UaWzA==: 00:12:30.148 00:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:01:NzhmNWI4OGVkOTNjNTdlZjJkZGFkNDRlMjA0Yjg1ODZnNz/h: --dhchap-ctrl-secret DHHC-1:02:NzM4YzZmNDYzZDRlZmQ4ZDUxNjFjMzFmYzBlZTM3MmJmNjVkODkzMzdiNjkwNjU06UaWzA==: 00:12:30.715 00:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.715 00:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:12:30.715 00:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.715 00:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.715 00:28:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.715 00:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:30.715 00:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:30.715 00:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:30.974 00:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:12:30.974 00:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:30.974 00:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:30.974 00:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:30.974 00:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:30.974 00:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.974 00:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:30.974 00:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.974 00:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.974 00:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.974 00:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:30.974 00:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:30.974 00:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.232 00:12:31.232 00:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:31.232 00:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:31.232 00:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.799 00:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.799 00:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:31.799 00:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.799 00:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.799 00:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.799 00:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:31.799 { 00:12:31.799 "cntlid": 101, 00:12:31.799 "qid": 0, 00:12:31.799 "state": "enabled", 00:12:31.799 "thread": "nvmf_tgt_poll_group_000", 00:12:31.799 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:12:31.799 "listen_address": { 00:12:31.799 "trtype": "TCP", 00:12:31.799 "adrfam": "IPv4", 00:12:31.799 "traddr": "10.0.0.3", 00:12:31.799 "trsvcid": "4420" 00:12:31.799 }, 00:12:31.799 "peer_address": { 00:12:31.799 "trtype": "TCP", 00:12:31.799 "adrfam": "IPv4", 00:12:31.799 "traddr": "10.0.0.1", 00:12:31.799 "trsvcid": "35986" 00:12:31.799 }, 00:12:31.799 "auth": { 00:12:31.799 "state": "completed", 00:12:31.799 "digest": "sha512", 00:12:31.799 "dhgroup": "null" 00:12:31.799 } 00:12:31.799 } 00:12:31.799 ]' 00:12:31.799 00:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:31.799 00:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:31.799 00:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:31.799 00:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:31.800 00:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:31.800 00:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.800 00:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.800 00:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:32.057 00:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NDhmMTQ4ZTA0ZmRlZjUwYzJlMDc0OWU5ZjcwZGUufE1I: 00:12:32.057 00:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NDhmMTQ4ZTA0ZmRlZjUwYzJlMDc0OWU5ZjcwZGUufE1I: 00:12:32.993 00:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:32.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:32.994 00:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:12:32.994 00:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.994 00:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:12:32.994 00:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.994 00:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:32.994 00:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:32.994 00:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:32.994 00:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:12:32.994 00:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:32.994 00:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:32.994 00:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:32.994 00:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:32.994 00:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.994 00:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key3 00:12:32.994 00:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.994 00:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.994 00:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.994 00:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:32.994 00:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:32.994 00:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:33.253 00:12:33.253 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:33.253 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:33.253 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.821 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.821 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.821 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:33.821 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.821 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.821 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:33.821 { 00:12:33.821 "cntlid": 103, 00:12:33.821 "qid": 0, 00:12:33.821 "state": "enabled", 00:12:33.821 "thread": "nvmf_tgt_poll_group_000", 00:12:33.821 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:12:33.821 "listen_address": { 00:12:33.821 "trtype": "TCP", 00:12:33.821 "adrfam": "IPv4", 00:12:33.821 "traddr": "10.0.0.3", 00:12:33.821 "trsvcid": "4420" 00:12:33.821 }, 00:12:33.821 "peer_address": { 00:12:33.821 "trtype": "TCP", 00:12:33.821 "adrfam": "IPv4", 00:12:33.821 "traddr": "10.0.0.1", 00:12:33.821 "trsvcid": "41572" 00:12:33.821 }, 00:12:33.821 "auth": { 00:12:33.821 "state": "completed", 00:12:33.821 "digest": "sha512", 00:12:33.821 "dhgroup": "null" 00:12:33.821 } 00:12:33.821 } 00:12:33.821 ]' 00:12:33.821 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:33.821 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:33.821 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:33.822 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:33.822 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:33.822 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.822 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.822 00:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:34.080 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzY1ZDg4ZmQzZDRjNGJlMWM2Mzc1MWRiMzU1MjFjMGUyMmZkMGNjNDNkZGEyZGU1NTIwOWMxY2JjY2EyYmJjZLEN8fE=: 00:12:34.080 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:03:YzY1ZDg4ZmQzZDRjNGJlMWM2Mzc1MWRiMzU1MjFjMGUyMmZkMGNjNDNkZGEyZGU1NTIwOWMxY2JjY2EyYmJjZLEN8fE=: 00:12:35.017 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:35.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:35.017 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:12:35.017 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.017 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.017 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:12:35.017 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:35.017 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:35.017 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:35.017 00:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:35.276 00:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:12:35.276 00:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:35.276 00:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:35.276 00:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:35.276 00:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:35.276 00:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:35.276 00:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.276 00:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.276 00:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.276 00:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.276 00:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.276 00:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.276 00:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:35.535 00:12:35.535 00:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:35.535 00:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:35.535 00:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:35.794 00:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.794 00:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:35.794 
00:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.794 00:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.794 00:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.794 00:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:35.794 { 00:12:35.794 "cntlid": 105, 00:12:35.794 "qid": 0, 00:12:35.794 "state": "enabled", 00:12:35.794 "thread": "nvmf_tgt_poll_group_000", 00:12:35.794 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:12:35.794 "listen_address": { 00:12:35.794 "trtype": "TCP", 00:12:35.794 "adrfam": "IPv4", 00:12:35.794 "traddr": "10.0.0.3", 00:12:35.794 "trsvcid": "4420" 00:12:35.794 }, 00:12:35.794 "peer_address": { 00:12:35.794 "trtype": "TCP", 00:12:35.794 "adrfam": "IPv4", 00:12:35.794 "traddr": "10.0.0.1", 00:12:35.794 "trsvcid": "41602" 00:12:35.794 }, 00:12:35.794 "auth": { 00:12:35.794 "state": "completed", 00:12:35.794 "digest": "sha512", 00:12:35.794 "dhgroup": "ffdhe2048" 00:12:35.794 } 00:12:35.794 } 00:12:35.794 ]' 00:12:35.794 00:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:35.794 00:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:35.794 00:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:36.053 00:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:36.053 00:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:36.053 00:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.053 00:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.053 00:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.313 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTFiOTU5NDBhMTcxNTA0NDIxZTBmNTFiN2U3ZTRjYjlkYWVjNTQ1NWM0YjJiM2Y5xxMAtA==: --dhchap-ctrl-secret DHHC-1:03:ZjBmMTk2NjEwZWVjODlkZjgxMjAxNjc5NTM5ZTcyZjM4ZDllNjA0MTY1MTdiMjNkMzc4MmRmZGUxYWY3ZTYxMlTis34=: 00:12:36.313 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:00:NTFiOTU5NDBhMTcxNTA0NDIxZTBmNTFiN2U3ZTRjYjlkYWVjNTQ1NWM0YjJiM2Y5xxMAtA==: --dhchap-ctrl-secret DHHC-1:03:ZjBmMTk2NjEwZWVjODlkZjgxMjAxNjc5NTM5ZTcyZjM4ZDllNjA0MTY1MTdiMjNkMzc4MmRmZGUxYWY3ZTYxMlTis34=: 00:12:36.888 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.888 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:12:36.888 00:28:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.888 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.888 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.888 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:36.888 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:36.888 00:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:37.147 00:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:12:37.147 00:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:37.147 00:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:37.147 00:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:37.147 00:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:37.147 00:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:37.147 00:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:37.147 00:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.147 00:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.147 00:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.147 00:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:37.147 00:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:37.147 00:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:37.406 00:12:37.665 00:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:37.665 00:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.665 00:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:37.924 00:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:12:37.924 00:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:37.924 00:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.924 00:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.924 00:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.924 00:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:37.924 { 00:12:37.924 "cntlid": 107, 00:12:37.924 "qid": 0, 00:12:37.924 "state": "enabled", 00:12:37.924 "thread": "nvmf_tgt_poll_group_000", 00:12:37.924 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:12:37.924 "listen_address": { 00:12:37.924 "trtype": "TCP", 00:12:37.924 "adrfam": "IPv4", 00:12:37.924 "traddr": "10.0.0.3", 00:12:37.924 "trsvcid": "4420" 00:12:37.924 }, 00:12:37.924 "peer_address": { 00:12:37.924 "trtype": "TCP", 00:12:37.924 "adrfam": "IPv4", 00:12:37.924 "traddr": "10.0.0.1", 00:12:37.924 "trsvcid": "41636" 00:12:37.924 }, 00:12:37.924 "auth": { 00:12:37.924 "state": "completed", 00:12:37.924 "digest": "sha512", 00:12:37.924 "dhgroup": "ffdhe2048" 00:12:37.924 } 00:12:37.924 } 00:12:37.924 ]' 00:12:37.924 00:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:37.924 00:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:37.924 00:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:37.924 00:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:37.924 00:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:37.924 00:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:37.924 00:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:37.924 00:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.183 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzhmNWI4OGVkOTNjNTdlZjJkZGFkNDRlMjA0Yjg1ODZnNz/h: --dhchap-ctrl-secret DHHC-1:02:NzM4YzZmNDYzZDRlZmQ4ZDUxNjFjMzFmYzBlZTM3MmJmNjVkODkzMzdiNjkwNjU06UaWzA==: 00:12:38.184 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:01:NzhmNWI4OGVkOTNjNTdlZjJkZGFkNDRlMjA0Yjg1ODZnNz/h: --dhchap-ctrl-secret DHHC-1:02:NzM4YzZmNDYzZDRlZmQ4ZDUxNjFjMzFmYzBlZTM3MmJmNjVkODkzMzdiNjkwNjU06UaWzA==: 00:12:38.751 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:38.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:38.751 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:12:38.751 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.751 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.751 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.751 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:38.751 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:38.751 00:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:39.010 00:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:12:39.010 00:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:39.010 00:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:39.010 00:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:39.010 00:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:39.010 00:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.011 00:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:39.011 00:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.011 00:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.269 00:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.269 00:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:39.269 00:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:39.269 00:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:39.527 00:12:39.527 00:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:39.527 00:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:39.527 00:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:12:39.786 00:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:39.786 00:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:39.786 00:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.786 00:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.786 00:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.786 00:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:39.786 { 00:12:39.786 "cntlid": 109, 00:12:39.786 "qid": 0, 00:12:39.786 "state": "enabled", 00:12:39.786 "thread": "nvmf_tgt_poll_group_000", 00:12:39.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:12:39.786 "listen_address": { 00:12:39.786 "trtype": "TCP", 00:12:39.786 "adrfam": "IPv4", 00:12:39.786 "traddr": "10.0.0.3", 00:12:39.786 "trsvcid": "4420" 00:12:39.786 }, 00:12:39.786 "peer_address": { 00:12:39.786 "trtype": "TCP", 00:12:39.786 "adrfam": "IPv4", 00:12:39.786 "traddr": "10.0.0.1", 00:12:39.786 "trsvcid": "41662" 00:12:39.786 }, 00:12:39.786 "auth": { 00:12:39.786 "state": "completed", 00:12:39.786 "digest": "sha512", 00:12:39.786 "dhgroup": "ffdhe2048" 00:12:39.786 } 00:12:39.786 } 00:12:39.786 ]' 00:12:39.786 00:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:39.786 00:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:39.786 00:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:39.786 00:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:39.786 00:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:40.045 00:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.045 00:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.045 00:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.303 00:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NDhmMTQ4ZTA0ZmRlZjUwYzJlMDc0OWU5ZjcwZGUufE1I: 00:12:40.303 00:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NDhmMTQ4ZTA0ZmRlZjUwYzJlMDc0OWU5ZjcwZGUufE1I: 00:12:40.871 00:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:40.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
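The records above repeat one cycle per digest, DH group, and key index. For reference, the pass that just finished (sha512, ffdhe2048, key2) condenses to roughly the sketch below. It reuses only the helpers, RPCs, addresses, and key names already visible in this log: hostrpc wraps /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock as shown above, rpc_cmd is the target-side counterpart provided by the shared test helpers, the jq pipeline is a shorthand for the separate digest/dhgroup/state checks done by target/auth.sh@75-77, and $key2/$ckey2 stand for the literal DHHC-1 secrets printed in the nvme connect line above. It is a summary, not a verbatim excerpt.

# restrict the host to one DH-HMAC-CHAP digest and DH group for this pass
hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
# allow the host NQN on the subsystem with the key pair under test (bidirectional auth)
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
# attach a controller through the host-side SPDK instance, authenticating with the same keys
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
# confirm the qpair negotiated the expected parameters and finished authentication
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth | .state, .digest, .dhgroup'   # expect: completed / sha512 / ffdhe2048
# tear the host controller down, then repeat the handshake through the kernel initiator
hostrpc bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 \
    --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 \
    --dhchap-secret "$key2" --dhchap-ctrl-secret "$ckey2"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858

The same cycle then continues below for key3 with sha512/ffdhe2048, and afterwards for the remaining DH groups.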
00:12:40.871 00:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:12:40.871 00:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.871 00:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.871 00:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.871 00:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:40.871 00:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:40.871 00:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:41.131 00:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:12:41.131 00:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:41.131 00:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:41.131 00:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:41.131 00:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:41.131 00:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:41.131 00:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key3 00:12:41.131 00:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.131 00:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.131 00:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.131 00:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:41.131 00:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:41.131 00:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:41.699 00:12:41.699 00:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:41.699 00:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:41.699 00:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:41.958 00:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:41.958 00:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:41.958 00:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.958 00:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.958 00:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.958 00:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:41.958 { 00:12:41.958 "cntlid": 111, 00:12:41.958 "qid": 0, 00:12:41.958 "state": "enabled", 00:12:41.958 "thread": "nvmf_tgt_poll_group_000", 00:12:41.958 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:12:41.958 "listen_address": { 00:12:41.958 "trtype": "TCP", 00:12:41.958 "adrfam": "IPv4", 00:12:41.958 "traddr": "10.0.0.3", 00:12:41.958 "trsvcid": "4420" 00:12:41.958 }, 00:12:41.958 "peer_address": { 00:12:41.958 "trtype": "TCP", 00:12:41.958 "adrfam": "IPv4", 00:12:41.958 "traddr": "10.0.0.1", 00:12:41.958 "trsvcid": "41688" 00:12:41.958 }, 00:12:41.958 "auth": { 00:12:41.958 "state": "completed", 00:12:41.958 "digest": "sha512", 00:12:41.958 "dhgroup": "ffdhe2048" 00:12:41.958 } 00:12:41.958 } 00:12:41.958 ]' 00:12:41.958 00:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:41.958 00:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:41.958 00:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:41.958 00:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:41.958 00:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:41.958 00:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:41.958 00:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:41.958 00:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.528 00:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzY1ZDg4ZmQzZDRjNGJlMWM2Mzc1MWRiMzU1MjFjMGUyMmZkMGNjNDNkZGEyZGU1NTIwOWMxY2JjY2EyYmJjZLEN8fE=: 00:12:42.528 00:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:03:YzY1ZDg4ZmQzZDRjNGJlMWM2Mzc1MWRiMzU1MjFjMGUyMmZkMGNjNDNkZGEyZGU1NTIwOWMxY2JjY2EyYmJjZLEN8fE=: 00:12:43.096 00:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:43.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:43.096 00:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:12:43.096 00:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.096 00:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.096 00:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.096 00:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:43.096 00:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:43.096 00:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:43.096 00:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:43.356 00:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:12:43.356 00:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:43.356 00:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:43.356 00:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:43.356 00:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:43.356 00:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:43.356 00:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:43.356 00:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.356 00:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.356 00:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.356 00:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:43.356 00:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:43.356 00:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:43.615 00:12:43.615 00:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:43.615 00:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:43.615 00:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:43.874 00:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:43.874 00:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:43.874 00:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.874 00:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.874 00:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.874 00:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:43.874 { 00:12:43.874 "cntlid": 113, 00:12:43.874 "qid": 0, 00:12:43.874 "state": "enabled", 00:12:43.874 "thread": "nvmf_tgt_poll_group_000", 00:12:43.874 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:12:43.874 "listen_address": { 00:12:43.874 "trtype": "TCP", 00:12:43.874 "adrfam": "IPv4", 00:12:43.874 "traddr": "10.0.0.3", 00:12:43.874 "trsvcid": "4420" 00:12:43.874 }, 00:12:43.874 "peer_address": { 00:12:43.874 "trtype": "TCP", 00:12:43.874 "adrfam": "IPv4", 00:12:43.874 "traddr": "10.0.0.1", 00:12:43.874 "trsvcid": "55482" 00:12:43.874 }, 00:12:43.874 "auth": { 00:12:43.874 "state": "completed", 00:12:43.874 "digest": "sha512", 00:12:43.874 "dhgroup": "ffdhe3072" 00:12:43.874 } 00:12:43.874 } 00:12:43.874 ]' 00:12:43.874 00:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:43.874 00:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:43.874 00:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:44.134 00:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:44.134 00:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:44.134 00:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:44.134 00:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:44.134 00:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.393 00:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTFiOTU5NDBhMTcxNTA0NDIxZTBmNTFiN2U3ZTRjYjlkYWVjNTQ1NWM0YjJiM2Y5xxMAtA==: --dhchap-ctrl-secret DHHC-1:03:ZjBmMTk2NjEwZWVjODlkZjgxMjAxNjc5NTM5ZTcyZjM4ZDllNjA0MTY1MTdiMjNkMzc4MmRmZGUxYWY3ZTYxMlTis34=: 00:12:44.393 00:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:00:NTFiOTU5NDBhMTcxNTA0NDIxZTBmNTFiN2U3ZTRjYjlkYWVjNTQ1NWM0YjJiM2Y5xxMAtA==: --dhchap-ctrl-secret 
DHHC-1:03:ZjBmMTk2NjEwZWVjODlkZjgxMjAxNjc5NTM5ZTcyZjM4ZDllNjA0MTY1MTdiMjNkMzc4MmRmZGUxYWY3ZTYxMlTis34=: 00:12:44.961 00:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.961 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.961 00:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:12:44.961 00:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.961 00:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.961 00:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.961 00:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:44.961 00:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:44.961 00:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:45.529 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:12:45.529 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:45.529 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:45.529 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:45.529 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:45.529 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:45.529 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.529 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.529 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.529 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.529 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.529 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.529 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.788 00:12:45.788 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:45.788 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:45.788 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.056 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:46.056 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:46.056 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.056 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.056 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.056 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:46.056 { 00:12:46.056 "cntlid": 115, 00:12:46.056 "qid": 0, 00:12:46.056 "state": "enabled", 00:12:46.056 "thread": "nvmf_tgt_poll_group_000", 00:12:46.056 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:12:46.056 "listen_address": { 00:12:46.056 "trtype": "TCP", 00:12:46.056 "adrfam": "IPv4", 00:12:46.056 "traddr": "10.0.0.3", 00:12:46.056 "trsvcid": "4420" 00:12:46.056 }, 00:12:46.056 "peer_address": { 00:12:46.056 "trtype": "TCP", 00:12:46.056 "adrfam": "IPv4", 00:12:46.056 "traddr": "10.0.0.1", 00:12:46.056 "trsvcid": "55512" 00:12:46.056 }, 00:12:46.056 "auth": { 00:12:46.056 "state": "completed", 00:12:46.056 "digest": "sha512", 00:12:46.056 "dhgroup": "ffdhe3072" 00:12:46.056 } 00:12:46.056 } 00:12:46.056 ]' 00:12:46.056 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:46.056 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:46.056 00:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:46.056 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:46.056 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:46.328 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:46.329 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:46.329 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:46.329 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzhmNWI4OGVkOTNjNTdlZjJkZGFkNDRlMjA0Yjg1ODZnNz/h: --dhchap-ctrl-secret DHHC-1:02:NzM4YzZmNDYzZDRlZmQ4ZDUxNjFjMzFmYzBlZTM3MmJmNjVkODkzMzdiNjkwNjU06UaWzA==: 00:12:46.329 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 
93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:01:NzhmNWI4OGVkOTNjNTdlZjJkZGFkNDRlMjA0Yjg1ODZnNz/h: --dhchap-ctrl-secret DHHC-1:02:NzM4YzZmNDYzZDRlZmQ4ZDUxNjFjMzFmYzBlZTM3MmJmNjVkODkzMzdiNjkwNjU06UaWzA==: 00:12:46.896 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:47.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:47.155 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:12:47.155 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.155 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.155 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.155 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:47.155 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:47.155 00:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:47.413 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:12:47.413 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:47.413 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:47.413 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:47.413 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:47.413 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:47.413 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:47.413 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.413 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.413 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.413 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:47.413 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:47.413 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:47.672 00:12:47.672 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:47.672 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.672 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:47.930 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.930 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:47.930 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.930 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.930 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.931 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:47.931 { 00:12:47.931 "cntlid": 117, 00:12:47.931 "qid": 0, 00:12:47.931 "state": "enabled", 00:12:47.931 "thread": "nvmf_tgt_poll_group_000", 00:12:47.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:12:47.931 "listen_address": { 00:12:47.931 "trtype": "TCP", 00:12:47.931 "adrfam": "IPv4", 00:12:47.931 "traddr": "10.0.0.3", 00:12:47.931 "trsvcid": "4420" 00:12:47.931 }, 00:12:47.931 "peer_address": { 00:12:47.931 "trtype": "TCP", 00:12:47.931 "adrfam": "IPv4", 00:12:47.931 "traddr": "10.0.0.1", 00:12:47.931 "trsvcid": "55528" 00:12:47.931 }, 00:12:47.931 "auth": { 00:12:47.931 "state": "completed", 00:12:47.931 "digest": "sha512", 00:12:47.931 "dhgroup": "ffdhe3072" 00:12:47.931 } 00:12:47.931 } 00:12:47.931 ]' 00:12:47.931 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:47.931 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:47.931 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:47.931 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:47.931 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:48.188 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:48.188 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:48.188 00:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:48.447 00:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NDhmMTQ4ZTA0ZmRlZjUwYzJlMDc0OWU5ZjcwZGUufE1I: 00:12:48.447 00:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NDhmMTQ4ZTA0ZmRlZjUwYzJlMDc0OWU5ZjcwZGUufE1I: 00:12:49.077 00:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:49.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:49.077 00:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:12:49.077 00:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.077 00:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.077 00:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.077 00:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:49.077 00:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:49.077 00:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:49.334 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:12:49.334 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:49.334 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:49.334 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:49.334 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:49.334 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:49.335 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key3 00:12:49.335 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.335 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.335 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.335 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:49.335 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:49.335 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:49.593 00:12:49.593 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:49.593 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.593 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:49.851 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.851 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:49.851 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.851 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.851 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.110 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:50.110 { 00:12:50.110 "cntlid": 119, 00:12:50.110 "qid": 0, 00:12:50.110 "state": "enabled", 00:12:50.110 "thread": "nvmf_tgt_poll_group_000", 00:12:50.110 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:12:50.110 "listen_address": { 00:12:50.110 "trtype": "TCP", 00:12:50.110 "adrfam": "IPv4", 00:12:50.110 "traddr": "10.0.0.3", 00:12:50.110 "trsvcid": "4420" 00:12:50.110 }, 00:12:50.110 "peer_address": { 00:12:50.110 "trtype": "TCP", 00:12:50.110 "adrfam": "IPv4", 00:12:50.110 "traddr": "10.0.0.1", 00:12:50.110 "trsvcid": "55546" 00:12:50.110 }, 00:12:50.110 "auth": { 00:12:50.110 "state": "completed", 00:12:50.110 "digest": "sha512", 00:12:50.110 "dhgroup": "ffdhe3072" 00:12:50.110 } 00:12:50.110 } 00:12:50.110 ]' 00:12:50.110 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:50.110 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:50.110 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:50.110 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:50.110 00:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:50.110 00:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:50.110 00:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:50.110 00:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:50.368 00:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzY1ZDg4ZmQzZDRjNGJlMWM2Mzc1MWRiMzU1MjFjMGUyMmZkMGNjNDNkZGEyZGU1NTIwOWMxY2JjY2EyYmJjZLEN8fE=: 00:12:50.368 00:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:03:YzY1ZDg4ZmQzZDRjNGJlMWM2Mzc1MWRiMzU1MjFjMGUyMmZkMGNjNDNkZGEyZGU1NTIwOWMxY2JjY2EyYmJjZLEN8fE=: 00:12:50.935 00:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.935 00:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:12:50.935 00:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.935 00:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.194 00:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.194 00:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:51.194 00:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:51.194 00:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:51.194 00:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:51.453 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:12:51.453 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:51.453 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:51.453 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:51.453 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:51.453 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:51.453 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:51.453 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.453 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.453 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.453 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:51.453 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:51.453 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:51.712 00:12:51.712 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:51.712 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:51.712 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.971 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.971 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.971 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.971 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.971 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.971 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:51.971 { 00:12:51.971 "cntlid": 121, 00:12:51.971 "qid": 0, 00:12:51.971 "state": "enabled", 00:12:51.971 "thread": "nvmf_tgt_poll_group_000", 00:12:51.971 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:12:51.971 "listen_address": { 00:12:51.971 "trtype": "TCP", 00:12:51.971 "adrfam": "IPv4", 00:12:51.971 "traddr": "10.0.0.3", 00:12:51.971 "trsvcid": "4420" 00:12:51.971 }, 00:12:51.971 "peer_address": { 00:12:51.971 "trtype": "TCP", 00:12:51.971 "adrfam": "IPv4", 00:12:51.971 "traddr": "10.0.0.1", 00:12:51.971 "trsvcid": "55586" 00:12:51.971 }, 00:12:51.971 "auth": { 00:12:51.971 "state": "completed", 00:12:51.971 "digest": "sha512", 00:12:51.971 "dhgroup": "ffdhe4096" 00:12:51.971 } 00:12:51.971 } 00:12:51.971 ]' 00:12:51.971 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:52.230 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:52.230 00:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:52.230 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:52.230 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:52.230 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:52.230 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:52.230 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:52.488 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTFiOTU5NDBhMTcxNTA0NDIxZTBmNTFiN2U3ZTRjYjlkYWVjNTQ1NWM0YjJiM2Y5xxMAtA==: --dhchap-ctrl-secret 
DHHC-1:03:ZjBmMTk2NjEwZWVjODlkZjgxMjAxNjc5NTM5ZTcyZjM4ZDllNjA0MTY1MTdiMjNkMzc4MmRmZGUxYWY3ZTYxMlTis34=: 00:12:52.488 00:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:00:NTFiOTU5NDBhMTcxNTA0NDIxZTBmNTFiN2U3ZTRjYjlkYWVjNTQ1NWM0YjJiM2Y5xxMAtA==: --dhchap-ctrl-secret DHHC-1:03:ZjBmMTk2NjEwZWVjODlkZjgxMjAxNjc5NTM5ZTcyZjM4ZDllNjA0MTY1MTdiMjNkMzc4MmRmZGUxYWY3ZTYxMlTis34=: 00:12:53.056 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:53.056 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:53.056 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:12:53.056 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.056 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.056 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.056 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:53.056 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:53.056 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:53.314 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:12:53.314 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:53.314 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:53.314 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:53.314 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:53.314 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:53.314 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:53.314 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.314 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.314 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.314 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:53.314 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:53.315 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:53.882 00:12:53.882 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:53.882 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:53.882 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:54.140 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:54.140 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:54.140 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.140 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.140 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.140 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:54.140 { 00:12:54.140 "cntlid": 123, 00:12:54.140 "qid": 0, 00:12:54.140 "state": "enabled", 00:12:54.140 "thread": "nvmf_tgt_poll_group_000", 00:12:54.140 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:12:54.140 "listen_address": { 00:12:54.140 "trtype": "TCP", 00:12:54.140 "adrfam": "IPv4", 00:12:54.140 "traddr": "10.0.0.3", 00:12:54.140 "trsvcid": "4420" 00:12:54.141 }, 00:12:54.141 "peer_address": { 00:12:54.141 "trtype": "TCP", 00:12:54.141 "adrfam": "IPv4", 00:12:54.141 "traddr": "10.0.0.1", 00:12:54.141 "trsvcid": "52266" 00:12:54.141 }, 00:12:54.141 "auth": { 00:12:54.141 "state": "completed", 00:12:54.141 "digest": "sha512", 00:12:54.141 "dhgroup": "ffdhe4096" 00:12:54.141 } 00:12:54.141 } 00:12:54.141 ]' 00:12:54.141 00:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:54.141 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:54.141 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:54.141 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:54.141 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:54.141 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:54.141 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:54.141 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.708 00:28:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzhmNWI4OGVkOTNjNTdlZjJkZGFkNDRlMjA0Yjg1ODZnNz/h: --dhchap-ctrl-secret DHHC-1:02:NzM4YzZmNDYzZDRlZmQ4ZDUxNjFjMzFmYzBlZTM3MmJmNjVkODkzMzdiNjkwNjU06UaWzA==: 00:12:54.708 00:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:01:NzhmNWI4OGVkOTNjNTdlZjJkZGFkNDRlMjA0Yjg1ODZnNz/h: --dhchap-ctrl-secret DHHC-1:02:NzM4YzZmNDYzZDRlZmQ4ZDUxNjFjMzFmYzBlZTM3MmJmNjVkODkzMzdiNjkwNjU06UaWzA==: 00:12:55.275 00:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:55.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:55.275 00:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:12:55.275 00:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.275 00:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.275 00:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.275 00:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:55.275 00:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:55.275 00:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:55.534 00:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:12:55.534 00:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:55.534 00:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:55.534 00:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:55.534 00:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:55.534 00:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:55.534 00:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:55.534 00:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.534 00:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.534 00:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.534 00:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:55.534 00:28:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:55.534 00:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:56.101 00:12:56.101 00:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:56.101 00:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:56.101 00:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:56.360 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:56.360 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:56.360 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.360 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.360 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.360 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:56.360 { 00:12:56.360 "cntlid": 125, 00:12:56.360 "qid": 0, 00:12:56.360 "state": "enabled", 00:12:56.360 "thread": "nvmf_tgt_poll_group_000", 00:12:56.360 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:12:56.360 "listen_address": { 00:12:56.360 "trtype": "TCP", 00:12:56.360 "adrfam": "IPv4", 00:12:56.360 "traddr": "10.0.0.3", 00:12:56.360 "trsvcid": "4420" 00:12:56.360 }, 00:12:56.360 "peer_address": { 00:12:56.360 "trtype": "TCP", 00:12:56.360 "adrfam": "IPv4", 00:12:56.360 "traddr": "10.0.0.1", 00:12:56.360 "trsvcid": "52294" 00:12:56.360 }, 00:12:56.360 "auth": { 00:12:56.360 "state": "completed", 00:12:56.360 "digest": "sha512", 00:12:56.360 "dhgroup": "ffdhe4096" 00:12:56.360 } 00:12:56.360 } 00:12:56.360 ]' 00:12:56.360 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:56.360 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:56.360 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:56.360 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:56.360 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:56.360 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:56.360 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:56.360 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:56.619 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NDhmMTQ4ZTA0ZmRlZjUwYzJlMDc0OWU5ZjcwZGUufE1I: 00:12:56.619 00:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NDhmMTQ4ZTA0ZmRlZjUwYzJlMDc0OWU5ZjcwZGUufE1I: 00:12:57.186 00:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:57.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:57.186 00:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:12:57.186 00:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.186 00:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.186 00:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.186 00:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:57.186 00:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:57.186 00:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:57.445 00:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:12:57.445 00:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:57.445 00:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:57.445 00:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:57.445 00:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:57.445 00:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:57.445 00:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key3 00:12:57.445 00:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.445 00:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.445 00:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.445 00:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:12:57.445 00:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:57.445 00:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:58.013 00:12:58.013 00:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:58.013 00:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:58.013 00:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:58.271 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:58.271 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:58.271 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.271 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.271 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.271 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:58.271 { 00:12:58.271 "cntlid": 127, 00:12:58.271 "qid": 0, 00:12:58.271 "state": "enabled", 00:12:58.271 "thread": "nvmf_tgt_poll_group_000", 00:12:58.271 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:12:58.271 "listen_address": { 00:12:58.271 "trtype": "TCP", 00:12:58.271 "adrfam": "IPv4", 00:12:58.271 "traddr": "10.0.0.3", 00:12:58.272 "trsvcid": "4420" 00:12:58.272 }, 00:12:58.272 "peer_address": { 00:12:58.272 "trtype": "TCP", 00:12:58.272 "adrfam": "IPv4", 00:12:58.272 "traddr": "10.0.0.1", 00:12:58.272 "trsvcid": "52310" 00:12:58.272 }, 00:12:58.272 "auth": { 00:12:58.272 "state": "completed", 00:12:58.272 "digest": "sha512", 00:12:58.272 "dhgroup": "ffdhe4096" 00:12:58.272 } 00:12:58.272 } 00:12:58.272 ]' 00:12:58.272 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:58.272 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:58.272 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:58.272 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:58.272 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:58.272 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:58.272 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:58.272 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:58.839 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzY1ZDg4ZmQzZDRjNGJlMWM2Mzc1MWRiMzU1MjFjMGUyMmZkMGNjNDNkZGEyZGU1NTIwOWMxY2JjY2EyYmJjZLEN8fE=: 00:12:58.839 00:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:03:YzY1ZDg4ZmQzZDRjNGJlMWM2Mzc1MWRiMzU1MjFjMGUyMmZkMGNjNDNkZGEyZGU1NTIwOWMxY2JjY2EyYmJjZLEN8fE=: 00:12:59.406 00:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:59.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:59.406 00:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:12:59.406 00:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.406 00:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.406 00:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.406 00:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:59.406 00:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:59.406 00:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:59.406 00:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:59.406 00:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:12:59.406 00:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:59.406 00:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:59.406 00:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:59.406 00:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:59.406 00:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:59.406 00:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:59.406 00:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.406 00:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.664 00:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.664 00:28:45 
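[Annotation] The loop resuming here runs one connect_authenticate iteration per digest/dhgroup/key combination; the iteration just starting uses sha512 + ffdhe6144 with key0/ckey0. A minimal sketch of that iteration, reconstructed only from the RPC calls visible in this trace (addresses, NQNs, the host UUID and the /var/tmp/host.sock socket are taken verbatim from the log; rpc_cmd is assumed to be the autotest wrapper that forwards to the target application's RPC server, and key0/ckey0 refer to DH-HMAC-CHAP keys loaded earlier in the run):

# Target side: require DH-HMAC-CHAP for this host, with a controller key for mutual auth.
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side (SPDK initiator behind /var/tmp/host.sock): pin the digest/dhgroup under
# test, then attach a controller, which forces the DH-HMAC-CHAP handshake to run.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

The rest of the iteration, as the trace shows below, verifies the negotiated auth parameters, detaches nvme0, reconnects once through the kernel initiator, and removes the host again before the next key index.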
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:59.664 00:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:59.664 00:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:00.231 00:13:00.231 00:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:00.231 00:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:00.231 00:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.490 00:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:00.490 00:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:00.490 00:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.490 00:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.490 00:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.490 00:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:00.490 { 00:13:00.490 "cntlid": 129, 00:13:00.490 "qid": 0, 00:13:00.490 "state": "enabled", 00:13:00.490 "thread": "nvmf_tgt_poll_group_000", 00:13:00.490 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:13:00.490 "listen_address": { 00:13:00.490 "trtype": "TCP", 00:13:00.490 "adrfam": "IPv4", 00:13:00.490 "traddr": "10.0.0.3", 00:13:00.490 "trsvcid": "4420" 00:13:00.490 }, 00:13:00.490 "peer_address": { 00:13:00.490 "trtype": "TCP", 00:13:00.490 "adrfam": "IPv4", 00:13:00.490 "traddr": "10.0.0.1", 00:13:00.490 "trsvcid": "52338" 00:13:00.490 }, 00:13:00.490 "auth": { 00:13:00.490 "state": "completed", 00:13:00.490 "digest": "sha512", 00:13:00.490 "dhgroup": "ffdhe6144" 00:13:00.490 } 00:13:00.490 } 00:13:00.490 ]' 00:13:00.490 00:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:00.490 00:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:00.490 00:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:00.490 00:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:00.490 00:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:00.490 00:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:00.490 00:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:00.490 00:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:00.749 00:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTFiOTU5NDBhMTcxNTA0NDIxZTBmNTFiN2U3ZTRjYjlkYWVjNTQ1NWM0YjJiM2Y5xxMAtA==: --dhchap-ctrl-secret DHHC-1:03:ZjBmMTk2NjEwZWVjODlkZjgxMjAxNjc5NTM5ZTcyZjM4ZDllNjA0MTY1MTdiMjNkMzc4MmRmZGUxYWY3ZTYxMlTis34=: 00:13:00.749 00:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:00:NTFiOTU5NDBhMTcxNTA0NDIxZTBmNTFiN2U3ZTRjYjlkYWVjNTQ1NWM0YjJiM2Y5xxMAtA==: --dhchap-ctrl-secret DHHC-1:03:ZjBmMTk2NjEwZWVjODlkZjgxMjAxNjc5NTM5ZTcyZjM4ZDllNjA0MTY1MTdiMjNkMzc4MmRmZGUxYWY3ZTYxMlTis34=: 00:13:01.325 00:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:01.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:01.586 00:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:13:01.586 00:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.586 00:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.586 00:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.586 00:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:01.586 00:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:01.586 00:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:01.844 00:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:13:01.844 00:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:01.844 00:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:01.844 00:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:01.844 00:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:01.844 00:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:01.844 00:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:01.844 00:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.844 00:28:47 
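[Annotation] The nvme_connect/nvme disconnect pair recorded just above exercises the same key material through the kernel initiator rather than the SPDK host. A sketch of that step, reusing the literal command and DHHC-1 secrets from the trace (-i 1 requests a single I/O queue and -l 0 a zero controller-loss timeout; because the host was added to the subsystem with --dhchap-key, the connect can only succeed if the DH-HMAC-CHAP handshake completes):

# Kernel initiator connect with the host secret and the controller secret for mutual auth.
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 \
    --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 \
    --dhchap-secret DHHC-1:00:NTFiOTU5NDBhMTcxNTA0NDIxZTBmNTFiN2U3ZTRjYjlkYWVjNTQ1NWM0YjJiM2Y5xxMAtA==: \
    --dhchap-ctrl-secret DHHC-1:03:ZjBmMTk2NjEwZWVjODlkZjgxMjAxNjc5NTM5ZTcyZjM4ZDllNjA0MTY1MTdiMjNkMzc4MmRmZGUxYWY3ZTYxMlTis34=:

# Tear the connection down again so the next key index starts from a clean state.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0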
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.845 00:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.845 00:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:01.845 00:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:01.845 00:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:02.412 00:13:02.412 00:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:02.412 00:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:02.412 00:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.412 00:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.412 00:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:02.412 00:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.412 00:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.412 00:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.412 00:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:02.412 { 00:13:02.412 "cntlid": 131, 00:13:02.412 "qid": 0, 00:13:02.412 "state": "enabled", 00:13:02.412 "thread": "nvmf_tgt_poll_group_000", 00:13:02.412 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:13:02.412 "listen_address": { 00:13:02.412 "trtype": "TCP", 00:13:02.412 "adrfam": "IPv4", 00:13:02.412 "traddr": "10.0.0.3", 00:13:02.412 "trsvcid": "4420" 00:13:02.412 }, 00:13:02.412 "peer_address": { 00:13:02.412 "trtype": "TCP", 00:13:02.412 "adrfam": "IPv4", 00:13:02.412 "traddr": "10.0.0.1", 00:13:02.412 "trsvcid": "59346" 00:13:02.412 }, 00:13:02.412 "auth": { 00:13:02.412 "state": "completed", 00:13:02.412 "digest": "sha512", 00:13:02.412 "dhgroup": "ffdhe6144" 00:13:02.412 } 00:13:02.412 } 00:13:02.412 ]' 00:13:02.412 00:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:02.671 00:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:02.671 00:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:02.671 00:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:02.671 00:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:13:02.671 00:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:02.671 00:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:02.671 00:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:02.929 00:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzhmNWI4OGVkOTNjNTdlZjJkZGFkNDRlMjA0Yjg1ODZnNz/h: --dhchap-ctrl-secret DHHC-1:02:NzM4YzZmNDYzZDRlZmQ4ZDUxNjFjMzFmYzBlZTM3MmJmNjVkODkzMzdiNjkwNjU06UaWzA==: 00:13:02.930 00:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:01:NzhmNWI4OGVkOTNjNTdlZjJkZGFkNDRlMjA0Yjg1ODZnNz/h: --dhchap-ctrl-secret DHHC-1:02:NzM4YzZmNDYzZDRlZmQ4ZDUxNjFjMzFmYzBlZTM3MmJmNjVkODkzMzdiNjkwNjU06UaWzA==: 00:13:03.497 00:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:03.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:03.497 00:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:13:03.497 00:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.497 00:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.497 00:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.497 00:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:03.497 00:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:03.497 00:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:03.756 00:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:13:03.756 00:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:03.756 00:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:03.756 00:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:03.756 00:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:03.756 00:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:03.756 00:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:03.756 00:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.756 00:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.028 00:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.028 00:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:04.028 00:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:04.028 00:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:04.300 00:13:04.300 00:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:04.300 00:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:04.300 00:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:04.559 00:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:04.559 00:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:04.559 00:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.559 00:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.559 00:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.559 00:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:04.559 { 00:13:04.559 "cntlid": 133, 00:13:04.559 "qid": 0, 00:13:04.559 "state": "enabled", 00:13:04.559 "thread": "nvmf_tgt_poll_group_000", 00:13:04.559 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:13:04.559 "listen_address": { 00:13:04.559 "trtype": "TCP", 00:13:04.559 "adrfam": "IPv4", 00:13:04.559 "traddr": "10.0.0.3", 00:13:04.559 "trsvcid": "4420" 00:13:04.559 }, 00:13:04.559 "peer_address": { 00:13:04.559 "trtype": "TCP", 00:13:04.559 "adrfam": "IPv4", 00:13:04.559 "traddr": "10.0.0.1", 00:13:04.559 "trsvcid": "59386" 00:13:04.559 }, 00:13:04.559 "auth": { 00:13:04.559 "state": "completed", 00:13:04.559 "digest": "sha512", 00:13:04.559 "dhgroup": "ffdhe6144" 00:13:04.559 } 00:13:04.559 } 00:13:04.559 ]' 00:13:04.559 00:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:04.817 00:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:04.817 00:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:04.817 00:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:13:04.817 00:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:04.817 00:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:04.817 00:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:04.817 00:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:05.076 00:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NDhmMTQ4ZTA0ZmRlZjUwYzJlMDc0OWU5ZjcwZGUufE1I: 00:13:05.076 00:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NDhmMTQ4ZTA0ZmRlZjUwYzJlMDc0OWU5ZjcwZGUufE1I: 00:13:05.644 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:05.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:05.644 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:13:05.644 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.644 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.644 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.644 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:05.644 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:05.644 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:05.903 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:13:05.903 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:05.903 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:05.903 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:05.903 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:05.903 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:05.903 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key3 00:13:05.903 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.903 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.903 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.903 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:05.903 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:05.903 00:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:06.470 00:13:06.470 00:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:06.470 00:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:06.470 00:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:06.729 00:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:06.729 00:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:06.729 00:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.729 00:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.729 00:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.729 00:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:06.729 { 00:13:06.729 "cntlid": 135, 00:13:06.729 "qid": 0, 00:13:06.729 "state": "enabled", 00:13:06.729 "thread": "nvmf_tgt_poll_group_000", 00:13:06.729 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:13:06.729 "listen_address": { 00:13:06.729 "trtype": "TCP", 00:13:06.729 "adrfam": "IPv4", 00:13:06.729 "traddr": "10.0.0.3", 00:13:06.729 "trsvcid": "4420" 00:13:06.729 }, 00:13:06.729 "peer_address": { 00:13:06.729 "trtype": "TCP", 00:13:06.729 "adrfam": "IPv4", 00:13:06.729 "traddr": "10.0.0.1", 00:13:06.729 "trsvcid": "59406" 00:13:06.729 }, 00:13:06.729 "auth": { 00:13:06.729 "state": "completed", 00:13:06.729 "digest": "sha512", 00:13:06.729 "dhgroup": "ffdhe6144" 00:13:06.729 } 00:13:06.729 } 00:13:06.729 ]' 00:13:06.729 00:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:06.729 00:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:06.729 00:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:06.729 00:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:06.729 00:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:06.987 00:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:06.987 00:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:06.987 00:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:07.246 00:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzY1ZDg4ZmQzZDRjNGJlMWM2Mzc1MWRiMzU1MjFjMGUyMmZkMGNjNDNkZGEyZGU1NTIwOWMxY2JjY2EyYmJjZLEN8fE=: 00:13:07.246 00:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:03:YzY1ZDg4ZmQzZDRjNGJlMWM2Mzc1MWRiMzU1MjFjMGUyMmZkMGNjNDNkZGEyZGU1NTIwOWMxY2JjY2EyYmJjZLEN8fE=: 00:13:07.813 00:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:07.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:07.813 00:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:13:07.813 00:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.813 00:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.813 00:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.813 00:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:07.813 00:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:07.813 00:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:07.813 00:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:08.071 00:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:13:08.071 00:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:08.071 00:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:08.071 00:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:08.071 00:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:08.071 00:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:08.071 00:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:08.071 00:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.071 00:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.071 00:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.071 00:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:08.071 00:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:08.071 00:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:08.639 00:13:08.639 00:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:08.639 00:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:08.639 00:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.897 00:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:08.897 00:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:08.897 00:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.897 00:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.897 00:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.897 00:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:08.897 { 00:13:08.897 "cntlid": 137, 00:13:08.897 "qid": 0, 00:13:08.897 "state": "enabled", 00:13:08.897 "thread": "nvmf_tgt_poll_group_000", 00:13:08.897 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:13:08.897 "listen_address": { 00:13:08.897 "trtype": "TCP", 00:13:08.897 "adrfam": "IPv4", 00:13:08.897 "traddr": "10.0.0.3", 00:13:08.897 "trsvcid": "4420" 00:13:08.897 }, 00:13:08.897 "peer_address": { 00:13:08.897 "trtype": "TCP", 00:13:08.897 "adrfam": "IPv4", 00:13:08.897 "traddr": "10.0.0.1", 00:13:08.897 "trsvcid": "59434" 00:13:08.897 }, 00:13:08.897 "auth": { 00:13:08.897 "state": "completed", 00:13:08.897 "digest": "sha512", 00:13:08.897 "dhgroup": "ffdhe8192" 00:13:08.897 } 00:13:08.897 } 00:13:08.897 ]' 00:13:08.897 00:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:08.897 00:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:08.897 00:28:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:08.897 00:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:08.898 00:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:09.156 00:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:09.156 00:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:09.156 00:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:09.414 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTFiOTU5NDBhMTcxNTA0NDIxZTBmNTFiN2U3ZTRjYjlkYWVjNTQ1NWM0YjJiM2Y5xxMAtA==: --dhchap-ctrl-secret DHHC-1:03:ZjBmMTk2NjEwZWVjODlkZjgxMjAxNjc5NTM5ZTcyZjM4ZDllNjA0MTY1MTdiMjNkMzc4MmRmZGUxYWY3ZTYxMlTis34=: 00:13:09.414 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:00:NTFiOTU5NDBhMTcxNTA0NDIxZTBmNTFiN2U3ZTRjYjlkYWVjNTQ1NWM0YjJiM2Y5xxMAtA==: --dhchap-ctrl-secret DHHC-1:03:ZjBmMTk2NjEwZWVjODlkZjgxMjAxNjc5NTM5ZTcyZjM4ZDllNjA0MTY1MTdiMjNkMzc4MmRmZGUxYWY3ZTYxMlTis34=: 00:13:09.981 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:09.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:09.981 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:13:09.981 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.981 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.981 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.981 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:09.981 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:09.981 00:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:10.239 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:13:10.239 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:10.239 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:10.239 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:10.239 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:10.239 00:28:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:10.239 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:10.239 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.239 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.239 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.239 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:10.239 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:10.239 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:10.807 00:13:10.807 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:10.807 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:10.807 00:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:11.066 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:11.066 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:11.066 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.066 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.066 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.066 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:11.066 { 00:13:11.066 "cntlid": 139, 00:13:11.066 "qid": 0, 00:13:11.066 "state": "enabled", 00:13:11.066 "thread": "nvmf_tgt_poll_group_000", 00:13:11.066 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:13:11.066 "listen_address": { 00:13:11.066 "trtype": "TCP", 00:13:11.066 "adrfam": "IPv4", 00:13:11.066 "traddr": "10.0.0.3", 00:13:11.066 "trsvcid": "4420" 00:13:11.066 }, 00:13:11.066 "peer_address": { 00:13:11.066 "trtype": "TCP", 00:13:11.066 "adrfam": "IPv4", 00:13:11.066 "traddr": "10.0.0.1", 00:13:11.066 "trsvcid": "59472" 00:13:11.066 }, 00:13:11.066 "auth": { 00:13:11.066 "state": "completed", 00:13:11.066 "digest": "sha512", 00:13:11.066 "dhgroup": "ffdhe8192" 00:13:11.066 } 00:13:11.066 } 00:13:11.066 ]' 00:13:11.066 00:28:57 
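[Annotation] The qpairs dump captured above is then checked field by field. A condensed equivalent of the target/auth.sh@73-77 checks that follow in the trace, assuming $qpairs holds the JSON array shown (the here-string is shorthand for however the script feeds jq; expected values match the ffdhe8192/sha512 combination under test here):

# Confirm the controller attached under the expected name, then confirm the qpair
# negotiated the digest and DH group under test and finished authentication.
[[ $(jq -r '.[].name' <<< "$controllers") == nvme0 ]]
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]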
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:11.325 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:11.325 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:11.325 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:11.325 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:11.325 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:11.325 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:11.325 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:11.583 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzhmNWI4OGVkOTNjNTdlZjJkZGFkNDRlMjA0Yjg1ODZnNz/h: --dhchap-ctrl-secret DHHC-1:02:NzM4YzZmNDYzZDRlZmQ4ZDUxNjFjMzFmYzBlZTM3MmJmNjVkODkzMzdiNjkwNjU06UaWzA==: 00:13:11.583 00:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:01:NzhmNWI4OGVkOTNjNTdlZjJkZGFkNDRlMjA0Yjg1ODZnNz/h: --dhchap-ctrl-secret DHHC-1:02:NzM4YzZmNDYzZDRlZmQ4ZDUxNjFjMzFmYzBlZTM3MmJmNjVkODkzMzdiNjkwNjU06UaWzA==: 00:13:12.151 00:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:12.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.151 00:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:13:12.151 00:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.151 00:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.151 00:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.151 00:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:12.151 00:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:12.151 00:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:12.409 00:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:13:12.409 00:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:12.409 00:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:12.409 00:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:13:12.409 00:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:12.409 00:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.409 00:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:12.409 00:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.409 00:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.409 00:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.409 00:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:12.409 00:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:12.410 00:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:12.988 00:13:12.988 00:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:12.988 00:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:12.988 00:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:13.571 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:13.571 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:13.571 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.571 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.571 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.571 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:13.571 { 00:13:13.571 "cntlid": 141, 00:13:13.571 "qid": 0, 00:13:13.571 "state": "enabled", 00:13:13.571 "thread": "nvmf_tgt_poll_group_000", 00:13:13.571 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:13:13.571 "listen_address": { 00:13:13.571 "trtype": "TCP", 00:13:13.571 "adrfam": "IPv4", 00:13:13.571 "traddr": "10.0.0.3", 00:13:13.571 "trsvcid": "4420" 00:13:13.571 }, 00:13:13.571 "peer_address": { 00:13:13.571 "trtype": "TCP", 00:13:13.571 "adrfam": "IPv4", 00:13:13.571 "traddr": "10.0.0.1", 00:13:13.571 "trsvcid": "53134" 00:13:13.571 }, 00:13:13.571 "auth": { 00:13:13.571 "state": "completed", 00:13:13.571 "digest": 
"sha512", 00:13:13.571 "dhgroup": "ffdhe8192" 00:13:13.571 } 00:13:13.571 } 00:13:13.571 ]' 00:13:13.571 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:13.571 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:13.571 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:13.571 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:13.571 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:13.571 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.571 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.571 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.830 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NDhmMTQ4ZTA0ZmRlZjUwYzJlMDc0OWU5ZjcwZGUufE1I: 00:13:13.830 00:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: --dhchap-ctrl-secret DHHC-1:01:ZWQ5NDhmMTQ4ZTA0ZmRlZjUwYzJlMDc0OWU5ZjcwZGUufE1I: 00:13:14.766 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:14.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:14.766 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:13:14.766 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.766 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.767 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.767 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:14.767 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:14.767 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:14.767 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:13:14.767 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:14.767 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:13:14.767 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:14.767 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:14.767 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:14.767 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key3 00:13:14.767 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.767 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.767 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.767 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:14.767 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:14.767 00:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:15.704 00:13:15.704 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:15.704 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:15.704 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:15.704 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:15.704 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:15.704 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.704 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.704 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.704 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:15.704 { 00:13:15.704 "cntlid": 143, 00:13:15.704 "qid": 0, 00:13:15.704 "state": "enabled", 00:13:15.704 "thread": "nvmf_tgt_poll_group_000", 00:13:15.704 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:13:15.704 "listen_address": { 00:13:15.704 "trtype": "TCP", 00:13:15.704 "adrfam": "IPv4", 00:13:15.704 "traddr": "10.0.0.3", 00:13:15.704 "trsvcid": "4420" 00:13:15.704 }, 00:13:15.704 "peer_address": { 00:13:15.704 "trtype": "TCP", 00:13:15.704 "adrfam": "IPv4", 00:13:15.704 "traddr": "10.0.0.1", 00:13:15.704 "trsvcid": "53168" 00:13:15.704 }, 00:13:15.704 "auth": { 00:13:15.704 "state": "completed", 00:13:15.704 
"digest": "sha512", 00:13:15.704 "dhgroup": "ffdhe8192" 00:13:15.704 } 00:13:15.704 } 00:13:15.704 ]' 00:13:15.704 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:15.704 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:15.704 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:15.963 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:15.963 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:15.963 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:15.963 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.963 00:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:16.221 00:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzY1ZDg4ZmQzZDRjNGJlMWM2Mzc1MWRiMzU1MjFjMGUyMmZkMGNjNDNkZGEyZGU1NTIwOWMxY2JjY2EyYmJjZLEN8fE=: 00:13:16.221 00:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:03:YzY1ZDg4ZmQzZDRjNGJlMWM2Mzc1MWRiMzU1MjFjMGUyMmZkMGNjNDNkZGEyZGU1NTIwOWMxY2JjY2EyYmJjZLEN8fE=: 00:13:16.790 00:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.790 00:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:13:16.790 00:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.790 00:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.790 00:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.790 00:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:16.790 00:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:13:16.790 00:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:16.790 00:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:16.790 00:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:16.790 00:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:17.049 00:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:13:17.049 00:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:17.049 00:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:17.049 00:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:17.049 00:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:17.049 00:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:17.049 00:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.049 00:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.049 00:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.049 00:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.049 00:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.049 00:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.049 00:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.616 00:13:17.616 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:17.616 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:17.616 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.875 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.875 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.875 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.875 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.875 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.875 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:17.875 { 00:13:17.875 "cntlid": 145, 00:13:17.875 "qid": 0, 00:13:17.875 "state": "enabled", 00:13:17.875 "thread": "nvmf_tgt_poll_group_000", 00:13:17.875 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:13:17.875 "listen_address": { 00:13:17.875 "trtype": "TCP", 00:13:17.875 "adrfam": "IPv4", 00:13:17.875 "traddr": "10.0.0.3", 00:13:17.875 "trsvcid": "4420" 00:13:17.875 }, 00:13:17.875 "peer_address": { 00:13:17.875 "trtype": "TCP", 00:13:17.875 "adrfam": "IPv4", 00:13:17.875 "traddr": "10.0.0.1", 00:13:17.875 "trsvcid": "53182" 00:13:17.875 }, 00:13:17.875 "auth": { 00:13:17.875 "state": "completed", 00:13:17.875 "digest": "sha512", 00:13:17.875 "dhgroup": "ffdhe8192" 00:13:17.875 } 00:13:17.875 } 00:13:17.875 ]' 00:13:17.875 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:18.133 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:18.133 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:18.133 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:18.134 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:18.134 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:18.134 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:18.134 00:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:18.392 00:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTFiOTU5NDBhMTcxNTA0NDIxZTBmNTFiN2U3ZTRjYjlkYWVjNTQ1NWM0YjJiM2Y5xxMAtA==: --dhchap-ctrl-secret DHHC-1:03:ZjBmMTk2NjEwZWVjODlkZjgxMjAxNjc5NTM5ZTcyZjM4ZDllNjA0MTY1MTdiMjNkMzc4MmRmZGUxYWY3ZTYxMlTis34=: 00:13:18.392 00:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:00:NTFiOTU5NDBhMTcxNTA0NDIxZTBmNTFiN2U3ZTRjYjlkYWVjNTQ1NWM0YjJiM2Y5xxMAtA==: --dhchap-ctrl-secret DHHC-1:03:ZjBmMTk2NjEwZWVjODlkZjgxMjAxNjc5NTM5ZTcyZjM4ZDllNjA0MTY1MTdiMjNkMzc4MmRmZGUxYWY3ZTYxMlTis34=: 00:13:18.959 00:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:18.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:18.959 00:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:13:18.959 00:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.959 00:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.959 00:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.959 00:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key1 00:13:18.959 00:29:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.959 00:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.959 00:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.959 00:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:13:18.959 00:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:18.959 00:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:13:18.959 00:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:18.959 00:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:18.959 00:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:18.959 00:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:18.959 00:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:13:18.959 00:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:18.959 00:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:19.901 request: 00:13:19.901 { 00:13:19.901 "name": "nvme0", 00:13:19.901 "trtype": "tcp", 00:13:19.901 "traddr": "10.0.0.3", 00:13:19.901 "adrfam": "ipv4", 00:13:19.901 "trsvcid": "4420", 00:13:19.901 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:19.901 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:13:19.901 "prchk_reftag": false, 00:13:19.901 "prchk_guard": false, 00:13:19.901 "hdgst": false, 00:13:19.901 "ddgst": false, 00:13:19.901 "dhchap_key": "key2", 00:13:19.901 "allow_unrecognized_csi": false, 00:13:19.901 "method": "bdev_nvme_attach_controller", 00:13:19.901 "req_id": 1 00:13:19.901 } 00:13:19.901 Got JSON-RPC error response 00:13:19.901 response: 00:13:19.901 { 00:13:19.901 "code": -5, 00:13:19.901 "message": "Input/output error" 00:13:19.901 } 00:13:19.901 00:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:19.901 00:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:19.901 00:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:19.901 00:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:19.901 00:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:13:19.901 
00:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.901 00:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.901 00:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.901 00:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:19.901 00:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.901 00:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.901 00:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.901 00:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:19.901 00:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:19.901 00:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:19.901 00:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:19.901 00:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:19.901 00:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:19.901 00:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:19.901 00:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:19.902 00:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:19.902 00:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:20.469 request: 00:13:20.469 { 00:13:20.469 "name": "nvme0", 00:13:20.469 "trtype": "tcp", 00:13:20.469 "traddr": "10.0.0.3", 00:13:20.469 "adrfam": "ipv4", 00:13:20.469 "trsvcid": "4420", 00:13:20.469 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:20.469 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:13:20.469 "prchk_reftag": false, 00:13:20.469 "prchk_guard": false, 00:13:20.469 "hdgst": false, 00:13:20.469 "ddgst": false, 00:13:20.469 "dhchap_key": "key1", 00:13:20.469 "dhchap_ctrlr_key": "ckey2", 00:13:20.469 "allow_unrecognized_csi": false, 00:13:20.469 "method": "bdev_nvme_attach_controller", 00:13:20.469 "req_id": 1 00:13:20.469 } 00:13:20.469 Got JSON-RPC error response 00:13:20.469 response: 00:13:20.469 { 
00:13:20.469 "code": -5, 00:13:20.469 "message": "Input/output error" 00:13:20.469 } 00:13:20.469 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:20.469 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:20.469 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:20.469 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:20.469 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:13:20.469 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.469 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.469 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.469 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key1 00:13:20.469 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.470 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.470 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.470 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.470 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:20.470 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.470 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:20.470 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:20.470 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:20.470 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:20.470 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.470 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.470 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:21.037 
request: 00:13:21.037 { 00:13:21.037 "name": "nvme0", 00:13:21.037 "trtype": "tcp", 00:13:21.037 "traddr": "10.0.0.3", 00:13:21.037 "adrfam": "ipv4", 00:13:21.037 "trsvcid": "4420", 00:13:21.037 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:21.037 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:13:21.037 "prchk_reftag": false, 00:13:21.037 "prchk_guard": false, 00:13:21.037 "hdgst": false, 00:13:21.037 "ddgst": false, 00:13:21.037 "dhchap_key": "key1", 00:13:21.037 "dhchap_ctrlr_key": "ckey1", 00:13:21.037 "allow_unrecognized_csi": false, 00:13:21.037 "method": "bdev_nvme_attach_controller", 00:13:21.037 "req_id": 1 00:13:21.037 } 00:13:21.037 Got JSON-RPC error response 00:13:21.037 response: 00:13:21.037 { 00:13:21.037 "code": -5, 00:13:21.037 "message": "Input/output error" 00:13:21.037 } 00:13:21.037 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:21.037 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:21.037 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:21.037 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:21.037 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:13:21.037 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.037 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.037 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.037 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 79089 00:13:21.037 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 79089 ']' 00:13:21.037 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 79089 00:13:21.037 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:13:21.037 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:21.037 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79089 00:13:21.037 killing process with pid 79089 00:13:21.037 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:21.037 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:21.037 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79089' 00:13:21.037 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 79089 00:13:21.037 00:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 79089 00:13:21.296 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:13:21.296 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:13:21.296 00:29:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:21.296 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.296 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=82145 00:13:21.296 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 82145 00:13:21.296 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:13:21.296 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 82145 ']' 00:13:21.296 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.296 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:21.296 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.296 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:21.296 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.555 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:21.555 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:21.555 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:13:21.555 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:21.555 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.555 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:21.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.555 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:21.555 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 82145 00:13:21.555 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 82145 ']' 00:13:21.555 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.555 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:21.555 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
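The entries above tear down the first target process (pid 79089) and restart nvmf_tgt inside the test namespace with --wait-for-rpc and the nvmf_auth log flag, so DH-HMAC-CHAP negotiation is traced and the keyring can be provisioned before subsystem init. A rough bring-up sketch, assuming an SPDK checkout and the same paths this log uses; the readiness check and the explicit framework_start_init are illustrative, not the test's own helpers:

    # start the target with auth tracing, deferring init until RPCs arrive
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    # wait for /var/tmp/spdk.sock to answer, then load keys (next entries) and finish init
    ./scripts/rpc.py -t 30 rpc_get_methods > /dev/null
    ./scripts/rpc.py framework_start_init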
00:13:21.555 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:21.555 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.813 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:21.813 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:21.813 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:13:21.813 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.813 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.071 null0 00:13:22.071 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.071 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:22.071 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.3oO 00:13:22.071 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.071 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.071 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.071 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.t5g ]] 00:13:22.071 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.t5g 00:13:22.071 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.071 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.071 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.071 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:22.071 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Qit 00:13:22.071 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.071 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.071 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.071 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.CJd ]] 00:13:22.071 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.CJd 00:13:22.071 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.071 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.071 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.071 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:22.071 00:29:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Y05 00:13:22.071 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.071 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.071 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.071 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.RLD ]] 00:13:22.071 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.RLD 00:13:22.071 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.071 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.071 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.071 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:22.071 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.1nW 00:13:22.071 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.071 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.071 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.071 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:13:22.071 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:13:22.071 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:22.071 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:22.072 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:22.072 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:22.072 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:22.072 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key3 00:13:22.072 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.072 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.072 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.072 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:22.072 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
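The keyring_file_add_key calls above register each generated secret file (key0 through key3 plus their ckeyN controller counterparts) under a name, and the subsystem configuration then refers to those names rather than to raw secrets. A condensed sketch of that wiring for key3, lifted from the commands visible in this log (both processes resolve the name against their own keyring; the host-side RPC socket is /var/tmp/host.sock as above):

    # target side: register the secret file and require it for this host NQN
    ./scripts/rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha512.1nW
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key3
    # host side: allow sha512/ffdhe8192 and attach with the matching named key
    ./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    ./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3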
00:13:22.072 00:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:23.007 nvme0n1 00:13:23.007 00:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:23.007 00:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:23.007 00:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:23.266 00:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:23.266 00:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.266 00:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.266 00:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.266 00:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.266 00:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:23.266 { 00:13:23.266 "cntlid": 1, 00:13:23.266 "qid": 0, 00:13:23.266 "state": "enabled", 00:13:23.266 "thread": "nvmf_tgt_poll_group_000", 00:13:23.266 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:13:23.266 "listen_address": { 00:13:23.266 "trtype": "TCP", 00:13:23.266 "adrfam": "IPv4", 00:13:23.266 "traddr": "10.0.0.3", 00:13:23.266 "trsvcid": "4420" 00:13:23.266 }, 00:13:23.266 "peer_address": { 00:13:23.266 "trtype": "TCP", 00:13:23.266 "adrfam": "IPv4", 00:13:23.266 "traddr": "10.0.0.1", 00:13:23.266 "trsvcid": "49554" 00:13:23.266 }, 00:13:23.266 "auth": { 00:13:23.266 "state": "completed", 00:13:23.266 "digest": "sha512", 00:13:23.266 "dhgroup": "ffdhe8192" 00:13:23.266 } 00:13:23.266 } 00:13:23.266 ]' 00:13:23.266 00:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:23.266 00:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:23.266 00:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:23.525 00:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:23.525 00:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:23.525 00:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:23.525 00:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.525 00:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:23.784 00:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YzY1ZDg4ZmQzZDRjNGJlMWM2Mzc1MWRiMzU1MjFjMGUyMmZkMGNjNDNkZGEyZGU1NTIwOWMxY2JjY2EyYmJjZLEN8fE=: 00:13:23.784 00:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:03:YzY1ZDg4ZmQzZDRjNGJlMWM2Mzc1MWRiMzU1MjFjMGUyMmZkMGNjNDNkZGEyZGU1NTIwOWMxY2JjY2EyYmJjZLEN8fE=: 00:13:24.350 00:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:24.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:24.609 00:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:13:24.609 00:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.609 00:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.609 00:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.609 00:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key3 00:13:24.609 00:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.609 00:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.609 00:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.609 00:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:13:24.609 00:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:13:24.868 00:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:24.868 00:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:24.868 00:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:24.868 00:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:24.868 00:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:24.868 00:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:24.868 00:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:24.868 00:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:24.868 00:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:24.868 00:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:25.126 request: 00:13:25.126 { 00:13:25.126 "name": "nvme0", 00:13:25.126 "trtype": "tcp", 00:13:25.126 "traddr": "10.0.0.3", 00:13:25.126 "adrfam": "ipv4", 00:13:25.126 "trsvcid": "4420", 00:13:25.126 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:25.126 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:13:25.126 "prchk_reftag": false, 00:13:25.126 "prchk_guard": false, 00:13:25.126 "hdgst": false, 00:13:25.126 "ddgst": false, 00:13:25.126 "dhchap_key": "key3", 00:13:25.126 "allow_unrecognized_csi": false, 00:13:25.126 "method": "bdev_nvme_attach_controller", 00:13:25.126 "req_id": 1 00:13:25.126 } 00:13:25.126 Got JSON-RPC error response 00:13:25.126 response: 00:13:25.126 { 00:13:25.126 "code": -5, 00:13:25.126 "message": "Input/output error" 00:13:25.126 } 00:13:25.126 00:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:25.126 00:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:25.126 00:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:25.126 00:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:25.127 00:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:13:25.127 00:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:13:25.127 00:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:25.127 00:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:25.417 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:25.417 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:25.417 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:25.417 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:25.417 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:25.417 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:25.417 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:25.418 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:25.418 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:25.418 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:25.696 request: 00:13:25.696 { 00:13:25.696 "name": "nvme0", 00:13:25.696 "trtype": "tcp", 00:13:25.696 "traddr": "10.0.0.3", 00:13:25.696 "adrfam": "ipv4", 00:13:25.696 "trsvcid": "4420", 00:13:25.696 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:25.696 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:13:25.696 "prchk_reftag": false, 00:13:25.696 "prchk_guard": false, 00:13:25.696 "hdgst": false, 00:13:25.696 "ddgst": false, 00:13:25.696 "dhchap_key": "key3", 00:13:25.696 "allow_unrecognized_csi": false, 00:13:25.696 "method": "bdev_nvme_attach_controller", 00:13:25.696 "req_id": 1 00:13:25.696 } 00:13:25.696 Got JSON-RPC error response 00:13:25.696 response: 00:13:25.696 { 00:13:25.696 "code": -5, 00:13:25.696 "message": "Input/output error" 00:13:25.696 } 00:13:25.696 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:25.696 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:25.696 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:25.696 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:25.696 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:25.696 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:13:25.696 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:25.696 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:25.696 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:25.696 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:25.961 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:13:25.961 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.961 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.961 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.961 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:13:25.961 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.961 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.961 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.961 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:25.961 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:25.961 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:25.961 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:25.961 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:25.961 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:25.961 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:25.961 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:25.962 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:25.962 00:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:26.528 request: 00:13:26.528 { 00:13:26.528 "name": "nvme0", 00:13:26.528 "trtype": "tcp", 00:13:26.528 "traddr": "10.0.0.3", 00:13:26.528 "adrfam": "ipv4", 00:13:26.528 "trsvcid": "4420", 00:13:26.528 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:26.528 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:13:26.528 "prchk_reftag": false, 00:13:26.528 "prchk_guard": false, 00:13:26.528 "hdgst": false, 00:13:26.528 "ddgst": false, 00:13:26.528 "dhchap_key": "key0", 00:13:26.528 "dhchap_ctrlr_key": "key1", 00:13:26.528 "allow_unrecognized_csi": false, 00:13:26.528 "method": "bdev_nvme_attach_controller", 00:13:26.528 "req_id": 1 00:13:26.528 } 00:13:26.528 Got JSON-RPC error response 00:13:26.528 response: 00:13:26.528 { 00:13:26.528 "code": -5, 00:13:26.528 "message": "Input/output error" 00:13:26.528 } 00:13:26.528 00:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:26.528 00:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:26.528 00:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:26.528 00:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:13:26.528 00:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:13:26.528 00:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:26.528 00:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:26.787 nvme0n1 00:13:26.787 00:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:13:26.787 00:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:26.787 00:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:13:27.045 00:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:27.045 00:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:27.045 00:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:27.303 00:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key1 00:13:27.303 00:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.303 00:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.303 00:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.303 00:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:27.303 00:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:27.303 00:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:28.238 nvme0n1 00:13:28.496 00:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:13:28.496 00:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:13:28.496 00:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:28.754 00:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:28.755 00:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:28.755 00:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.755 00:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.755 00:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.755 00:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:13:28.755 00:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:28.755 00:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:13:29.013 00:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.013 00:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: --dhchap-ctrl-secret DHHC-1:03:YzY1ZDg4ZmQzZDRjNGJlMWM2Mzc1MWRiMzU1MjFjMGUyMmZkMGNjNDNkZGEyZGU1NTIwOWMxY2JjY2EyYmJjZLEN8fE=: 00:13:29.013 00:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid 93817295-c2e4-400f-aefe-caa93fc06858 -l 0 --dhchap-secret DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: --dhchap-ctrl-secret DHHC-1:03:YzY1ZDg4ZmQzZDRjNGJlMWM2Mzc1MWRiMzU1MjFjMGUyMmZkMGNjNDNkZGEyZGU1NTIwOWMxY2JjY2EyYmJjZLEN8fE=: 00:13:29.590 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:13:29.590 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:13:29.591 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:13:29.591 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:13:29.591 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:13:29.591 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:13:29.591 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:13:29.591 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:29.591 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:29.849 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:13:29.849 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:29.849 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:13:29.849 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:29.849 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:29.849 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:29.849 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:29.849 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:29.849 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:29.849 00:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:30.414 request: 00:13:30.415 { 00:13:30.415 "name": "nvme0", 00:13:30.415 "trtype": "tcp", 00:13:30.415 "traddr": "10.0.0.3", 00:13:30.415 "adrfam": "ipv4", 00:13:30.415 "trsvcid": "4420", 00:13:30.415 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:30.415 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858", 00:13:30.415 "prchk_reftag": false, 00:13:30.415 "prchk_guard": false, 00:13:30.415 "hdgst": false, 00:13:30.415 "ddgst": false, 00:13:30.415 "dhchap_key": "key1", 00:13:30.415 "allow_unrecognized_csi": false, 00:13:30.415 "method": "bdev_nvme_attach_controller", 00:13:30.415 "req_id": 1 00:13:30.415 } 00:13:30.415 Got JSON-RPC error response 00:13:30.415 response: 00:13:30.415 { 00:13:30.415 "code": -5, 00:13:30.415 "message": "Input/output error" 00:13:30.415 } 00:13:30.415 00:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:30.415 00:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:30.415 00:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:30.415 00:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:30.415 00:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:30.415 00:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:30.415 00:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:31.349 nvme0n1 00:13:31.608 
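The trace above exercises DH-CHAP re-keying from the SPDK host side: a bdev controller is attached with one key, the target's allowed keys are changed with nvmf_subsystem_set_keys, and a reconnect with the now-revoked key is expected to fail (the code -5 "Input/output error" JSON-RPC response above), while an attach with the new key pair succeeds. A minimal sketch of that host-side sequence, using only the rpc.py calls visible in the trace (the socket path, addresses, NQNs and key names are taken from this run and would differ elsewhere; the test itself wraps these calls in its bdev_connect/hostrpc helpers):

  # Attach a controller over NVMe/TCP, authenticating with DH-CHAP key "key0"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0

  # Confirm the controller exists, then detach it before the keys are rotated
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0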
00:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:13:31.608 00:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:31.608 00:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:13:31.867 00:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.867 00:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:31.867 00:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.126 00:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:13:32.126 00:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.126 00:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.126 00:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.126 00:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:13:32.126 00:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:13:32.126 00:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:13:32.385 nvme0n1 00:13:32.385 00:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:13:32.385 00:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:13:32.385 00:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:32.644 00:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:32.644 00:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:32.644 00:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.903 00:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:32.903 00:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.903 00:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.903 00:29:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.903 00:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NzhmNWI4OGVkOTNjNTdlZjJkZGFkNDRlMjA0Yjg1ODZnNz/h: '' 2s 00:13:32.903 00:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:13:32.903 00:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:13:32.903 00:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NzhmNWI4OGVkOTNjNTdlZjJkZGFkNDRlMjA0Yjg1ODZnNz/h: 00:13:32.903 00:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:13:32.903 00:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:13:32.903 00:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:13:32.903 00:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NzhmNWI4OGVkOTNjNTdlZjJkZGFkNDRlMjA0Yjg1ODZnNz/h: ]] 00:13:32.903 00:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NzhmNWI4OGVkOTNjNTdlZjJkZGFkNDRlMjA0Yjg1ODZnNz/h: 00:13:32.903 00:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:13:32.903 00:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:13:32.903 00:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:13:34.807 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:13:34.807 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:13:34.807 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:13:34.807 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:13:35.066 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:13:35.066 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:13:35.066 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:13:35.066 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key1 --dhchap-ctrlr-key key2 00:13:35.066 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.066 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.066 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.066 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: 2s 00:13:35.066 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:13:35.066 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:13:35.066 00:29:20 
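The nvme_set_keys helper traced above re-keys the live kernel-initiator controller: it resolves the fabrics controller under /sys/devices/virtual/nvme-fabrics/ctl/, writes the new DHHC-1 secret to it, and waitforblk then polls lsblk until the namespace is visible again. The xtrace output shows the echo but not its redirect target, so the attribute names in the sketch below (dhchap_secret, and dhchap_ctrl_secret for the controller secret) are an assumption about the kernel sysfs interface, not something this log confirms:

  ctl=nvme0
  dev=/sys/devices/virtual/nvme-fabrics/ctl/$ctl
  # Assumed attribute name; the trace truncates the redirect target of the echo.
  echo "DHHC-1:01:<new-host-key>:" > "$dev/dhchap_secret"
  # Allow time for re-authentication, then wait until the namespace is back.
  sleep 2s
  until lsblk -l -o NAME | grep -q -w nvme0n1; do sleep 0.1; done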
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:13:35.066 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: 00:13:35.066 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:13:35.066 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:13:35.066 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:13:35.066 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: ]] 00:13:35.066 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MzkzZjQyOTc4ZGZjZTFmMjFkNDBiYjlhMTFlNDk4ZGEyYzhkZTFmM2MyMjc4NGRiB0PLcw==: 00:13:35.066 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:13:35.066 00:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:13:36.990 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:13:36.990 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:13:36.990 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:13:36.990 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:13:36.990 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:13:36.990 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:13:36.990 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:13:36.990 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.990 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:36.990 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.990 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.990 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.990 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:36.990 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:36.990 00:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:37.927 nvme0n1 00:13:37.927 00:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:37.927 00:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.927 00:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.927 00:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.927 00:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:37.927 00:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:38.861 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:13:38.861 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:38.861 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:13:39.119 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.119 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:13:39.119 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.119 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.119 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.119 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:13:39.119 00:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:13:39.377 00:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:13:39.377 00:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.377 00:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:13:39.635 00:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.635 00:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:39.635 00:29:25 
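The auth.sh@252-258 sequence above rotates keys on both ends without tearing the connection down: the target's subsystem/host entry is updated first, and the SPDK host then re-authenticates the already-attached bdev controller with bdev_nvme_set_keys. A condensed sketch with the same RPCs as the trace; note that rpc_cmd on the target side is shown here as a plain rpc.py call against the target's default socket, which is an assumption (the trace does not show which socket rpc_cmd uses):

  # Target side: allow only key2 (host) and key3 (controller) for this host NQN
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 \
      --dhchap-key key2 --dhchap-ctrlr-key key3

  # Host side: re-authenticate the existing controller with the matching key pair
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3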
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.635 00:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.635 00:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.635 00:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:39.635 00:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:39.635 00:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:39.635 00:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:13:39.635 00:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:39.635 00:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:13:39.635 00:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:39.635 00:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:39.635 00:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:40.200 request: 00:13:40.200 { 00:13:40.200 "name": "nvme0", 00:13:40.200 "dhchap_key": "key1", 00:13:40.200 "dhchap_ctrlr_key": "key3", 00:13:40.200 "method": "bdev_nvme_set_keys", 00:13:40.200 "req_id": 1 00:13:40.200 } 00:13:40.200 Got JSON-RPC error response 00:13:40.200 response: 00:13:40.200 { 00:13:40.200 "code": -13, 00:13:40.200 "message": "Permission denied" 00:13:40.200 } 00:13:40.200 00:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:40.200 00:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:40.200 00:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:40.200 00:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:40.200 00:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:13:40.200 00:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:40.200 00:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:13:40.459 00:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:13:40.459 00:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:13:41.398 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:13:41.398 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:13:41.398 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.657 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:13:41.657 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:41.657 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.657 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.657 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.657 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:41.657 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:41.657 00:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:43.034 nvme0n1 00:13:43.034 00:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:43.034 00:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.034 00:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.034 00:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.034 00:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:43.034 00:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:43.034 00:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:43.034 00:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:13:43.034 00:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:43.034 00:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:13:43.034 00:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:43.034 00:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:43.034 00:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:43.292 request: 00:13:43.292 { 00:13:43.292 "name": "nvme0", 00:13:43.292 "dhchap_key": "key2", 00:13:43.292 "dhchap_ctrlr_key": "key0", 00:13:43.292 "method": "bdev_nvme_set_keys", 00:13:43.292 "req_id": 1 00:13:43.292 } 00:13:43.292 Got JSON-RPC error response 00:13:43.292 response: 00:13:43.292 { 00:13:43.292 "code": -13, 00:13:43.292 "message": "Permission denied" 00:13:43.292 } 00:13:43.292 00:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:43.292 00:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:43.292 00:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:43.292 00:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:43.292 00:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:13:43.292 00:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:13:43.292 00:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:43.550 00:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:13:43.550 00:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:13:44.925 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:13:44.925 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:44.925 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:13:44.925 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:13:44.925 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:13:44.925 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:13:44.925 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 79121 00:13:44.925 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 79121 ']' 00:13:44.925 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 79121 00:13:44.925 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:13:44.925 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:44.925 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79121 00:13:44.925 killing process with pid 79121 00:13:44.925 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:44.925 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:44.926 00:29:30 
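The two "Permission denied" (code -13) responses above are the intended result: bdev_nvme_set_keys is invoked with a key pair the subsystem no longer allows, and the test's NOT/valid_exec_arg wrapper only asserts that the command exits non-zero. A minimal way to express the same negative check outside the harness, using plain shell negation in place of the NOT helper and the same RPC as the trace:

  # Expect re-authentication with a disallowed key pair to be rejected (-13 Permission denied)
  if ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0; then
      echo "re-authentication correctly rejected"
  fi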
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79121' 00:13:44.926 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 79121 00:13:44.926 00:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 79121 00:13:45.184 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:13:45.184 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:13:45.184 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:13:45.184 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:45.184 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:13:45.184 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:45.184 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:45.184 rmmod nvme_tcp 00:13:45.184 rmmod nvme_fabrics 00:13:45.184 rmmod nvme_keyring 00:13:45.442 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:45.442 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:13:45.442 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:13:45.442 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@513 -- # '[' -n 82145 ']' 00:13:45.442 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # killprocess 82145 00:13:45.442 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 82145 ']' 00:13:45.442 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 82145 00:13:45.442 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:13:45.442 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:45.442 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82145 00:13:45.442 killing process with pid 82145 00:13:45.442 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:45.442 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:45.442 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82145' 00:13:45.442 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 82145 00:13:45.442 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 82145 00:13:45.442 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:13:45.442 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:13:45.442 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:13:45.442 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:13:45.443 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-save 
00:13:45.443 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:13:45.443 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-restore 00:13:45.443 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:45.443 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:45.443 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:45.443 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:45.443 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:45.443 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:45.701 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:45.701 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:45.701 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:45.701 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:45.701 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:45.701 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:45.701 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:45.701 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:45.701 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:45.701 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:45.701 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.701 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:45.701 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.701 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:13:45.701 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.3oO /tmp/spdk.key-sha256.Qit /tmp/spdk.key-sha384.Y05 /tmp/spdk.key-sha512.1nW /tmp/spdk.key-sha512.t5g /tmp/spdk.key-sha384.CJd /tmp/spdk.key-sha256.RLD '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:13:45.701 00:13:45.701 real 3m7.320s 00:13:45.701 user 7m28.912s 00:13:45.701 sys 0m29.197s 00:13:45.701 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:45.701 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.701 ************************************ 00:13:45.701 END TEST nvmf_auth_target 
00:13:45.701 ************************************ 00:13:45.701 00:29:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:13:45.701 00:29:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:45.701 00:29:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:45.701 00:29:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:45.701 00:29:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:45.701 ************************************ 00:13:45.701 START TEST nvmf_bdevio_no_huge 00:13:45.701 ************************************ 00:13:45.701 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:45.961 * Looking for test storage... 00:13:45.961 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:45.961 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:45.961 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:45.961 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:13:45.961 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:45.961 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:45.961 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:45.961 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:45.961 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:13:45.961 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:13:45.961 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:13:45.961 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:13:45.961 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:13:45.961 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:13:45.961 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:13:45.961 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:45.961 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:13:45.961 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:13:45.961 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:45.961 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:45.961 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:13:45.961 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:13:45.961 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:45.961 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:13:45.961 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:13:45.961 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:13:45.961 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:13:45.961 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:45.961 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:13:45.961 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:13:45.961 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:45.961 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:45.961 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:13:45.961 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:45.961 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:45.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.961 --rc genhtml_branch_coverage=1 00:13:45.961 --rc genhtml_function_coverage=1 00:13:45.961 --rc genhtml_legend=1 00:13:45.961 --rc geninfo_all_blocks=1 00:13:45.961 --rc geninfo_unexecuted_blocks=1 00:13:45.961 00:13:45.961 ' 00:13:45.961 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:45.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.961 --rc genhtml_branch_coverage=1 00:13:45.961 --rc genhtml_function_coverage=1 00:13:45.961 --rc genhtml_legend=1 00:13:45.961 --rc geninfo_all_blocks=1 00:13:45.962 --rc geninfo_unexecuted_blocks=1 00:13:45.962 00:13:45.962 ' 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:45.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.962 --rc genhtml_branch_coverage=1 00:13:45.962 --rc genhtml_function_coverage=1 00:13:45.962 --rc genhtml_legend=1 00:13:45.962 --rc geninfo_all_blocks=1 00:13:45.962 --rc geninfo_unexecuted_blocks=1 00:13:45.962 00:13:45.962 ' 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:45.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.962 --rc genhtml_branch_coverage=1 00:13:45.962 --rc genhtml_function_coverage=1 00:13:45.962 --rc genhtml_legend=1 00:13:45.962 --rc geninfo_all_blocks=1 00:13:45.962 --rc geninfo_unexecuted_blocks=1 00:13:45.962 00:13:45.962 ' 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:45.962 
00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:45.962 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # prepare_net_devs 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@434 -- # local -g is_hw=no 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # remove_spdk_ns 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@456 -- # nvmf_veth_init 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:45.962 
00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:45.962 Cannot find device "nvmf_init_br" 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:45.962 Cannot find device "nvmf_init_br2" 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:45.962 Cannot find device "nvmf_tgt_br" 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:45.962 Cannot find device "nvmf_tgt_br2" 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:13:45.962 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:46.222 Cannot find device "nvmf_init_br" 00:13:46.222 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:13:46.222 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:46.222 Cannot find device "nvmf_init_br2" 00:13:46.222 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:13:46.222 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:46.222 Cannot find device "nvmf_tgt_br" 00:13:46.222 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:13:46.222 00:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:46.222 Cannot find device "nvmf_tgt_br2" 00:13:46.222 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:13:46.222 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:46.222 Cannot find device "nvmf_br" 00:13:46.222 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:13:46.222 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:46.222 Cannot find device "nvmf_init_if" 00:13:46.222 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:13:46.222 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:46.222 Cannot find device "nvmf_init_if2" 00:13:46.222 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:13:46.222 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:13:46.222 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:46.222 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:13:46.222 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:46.222 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:46.222 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:13:46.222 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:46.222 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:46.222 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:46.222 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:46.222 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:46.222 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:46.222 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:46.222 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:46.222 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:46.222 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:46.222 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:46.222 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:46.222 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:46.222 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:46.222 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:46.222 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:46.222 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:46.222 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:46.482 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:46.482 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:46.482 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:46.482 00:29:32 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:46.482 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:46.482 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:46.482 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:46.482 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:46.482 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:46.482 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:46.482 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:46.482 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:46.482 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:46.482 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:46.482 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:46.482 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:46.482 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:13:46.482 00:13:46.482 --- 10.0.0.3 ping statistics --- 00:13:46.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.482 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:13:46.482 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:46.482 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:46.482 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:13:46.482 00:13:46.482 --- 10.0.0.4 ping statistics --- 00:13:46.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.482 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:13:46.482 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:46.482 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:46.482 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:13:46.482 00:13:46.482 --- 10.0.0.1 ping statistics --- 00:13:46.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.482 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:13:46.482 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:46.482 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
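The trace above is nvmf/common.sh building the test network before any NVMe/TCP traffic flows: the initiator veth ends stay in the root namespace, the target ends are moved into nvmf_tgt_ns_spdk, a bridge joins the root-namespace ends, and every firewall rule is tagged with an SPDK_NVMF comment so it can be stripped later. A condensed sketch of the same idea, reduced to a single initiator/target pair (interface names and addresses taken from the log; root privileges assumed, error handling omitted):

# one initiator pair (stays in the root namespace) and one target pair
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_init_br master nvmf_br                      # bridge the root-namespace ends
ip link set nvmf_tgt_br  master nvmf_br
# rules are tagged so teardown can remove only what the test added
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.3                                           # root namespace -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # and back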
00:13:46.482 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:13:46.482 00:13:46.482 --- 10.0.0.2 ping statistics --- 00:13:46.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.482 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:13:46.482 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:46.482 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@457 -- # return 0 00:13:46.482 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:13:46.482 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:46.482 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:13:46.482 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:13:46.482 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:46.482 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:13:46.482 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:13:46.482 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:46.482 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:13:46.482 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:46.482 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:46.482 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # nvmfpid=82777 00:13:46.482 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:13:46.482 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # waitforlisten 82777 00:13:46.482 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 82777 ']' 00:13:46.482 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.482 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:46.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.482 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.482 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:46.482 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:46.482 [2024-12-17 00:29:32.401680] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
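What the harness does next (nvmfappstart) is start the target binary inside that namespace with --no-huge -s 1024, so this bdevio variant runs without hugepages, and then block until the app answers on /var/tmp/spdk.sock. A rough sketch of that launch; the polling loop below is an illustrative stand-in for the harness's waitforlisten helper, not its actual code:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!
# poll the RPC socket until the target is ready to accept configuration
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
    sleep 0.5
done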
00:13:46.482 [2024-12-17 00:29:32.401788] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:13:46.741 [2024-12-17 00:29:32.543267] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:46.741 [2024-12-17 00:29:32.651787] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:46.741 [2024-12-17 00:29:32.651850] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:46.741 [2024-12-17 00:29:32.651865] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:46.741 [2024-12-17 00:29:32.651875] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:46.741 [2024-12-17 00:29:32.651884] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:46.741 [2024-12-17 00:29:32.652011] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:13:46.741 [2024-12-17 00:29:32.652187] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:13:46.741 [2024-12-17 00:29:32.652289] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:13:46.741 [2024-12-17 00:29:32.652295] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:13:46.741 [2024-12-17 00:29:32.658836] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:46.999 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:46.999 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:13:46.999 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:13:46.999 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:46.999 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:46.999 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:46.999 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:46.999 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.999 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:46.999 [2024-12-17 00:29:32.848715] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:46.999 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.999 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:46.999 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.999 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:46.999 Malloc0 00:13:46.999 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.999 00:29:32 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:46.999 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.999 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:46.999 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.000 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:47.000 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.000 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:47.000 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.000 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:47.000 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.000 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:47.000 [2024-12-17 00:29:32.893047] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:47.000 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.000 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:13:47.000 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:47.000 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # config=() 00:13:47.000 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # local subsystem config 00:13:47.000 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:13:47.000 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:13:47.000 { 00:13:47.000 "params": { 00:13:47.000 "name": "Nvme$subsystem", 00:13:47.000 "trtype": "$TEST_TRANSPORT", 00:13:47.000 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:47.000 "adrfam": "ipv4", 00:13:47.000 "trsvcid": "$NVMF_PORT", 00:13:47.000 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:47.000 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:47.000 "hdgst": ${hdgst:-false}, 00:13:47.000 "ddgst": ${ddgst:-false} 00:13:47.000 }, 00:13:47.000 "method": "bdev_nvme_attach_controller" 00:13:47.000 } 00:13:47.000 EOF 00:13:47.000 )") 00:13:47.000 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # cat 00:13:47.000 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # jq . 
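With the target up, everything else is provisioned over JSON-RPC: a TCP transport, a RAM-backed Malloc bdev, a subsystem, its namespace, and a listener on the in-namespace address. The same sequence written as plain rpc.py calls (arguments copied from the trace; the harness routes them through its rpc_cmd wrapper):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                  # transport options from NVMF_TRANSPORT_OPTS
$rpc bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420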
00:13:47.000 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@581 -- # IFS=, 00:13:47.000 00:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:13:47.000 "params": { 00:13:47.000 "name": "Nvme1", 00:13:47.000 "trtype": "tcp", 00:13:47.000 "traddr": "10.0.0.3", 00:13:47.000 "adrfam": "ipv4", 00:13:47.000 "trsvcid": "4420", 00:13:47.000 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:47.000 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:47.000 "hdgst": false, 00:13:47.000 "ddgst": false 00:13:47.000 }, 00:13:47.000 "method": "bdev_nvme_attach_controller" 00:13:47.000 }' 00:13:47.000 [2024-12-17 00:29:32.960515] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:47.000 [2024-12-17 00:29:32.960685] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid82811 ] 00:13:47.271 [2024-12-17 00:29:33.108942] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:47.271 [2024-12-17 00:29:33.242099] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.271 [2024-12-17 00:29:33.242173] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:47.271 [2024-12-17 00:29:33.242184] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.271 [2024-12-17 00:29:33.257417] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:47.545 I/O targets: 00:13:47.545 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:47.545 00:13:47.545 00:13:47.545 CUnit - A unit testing framework for C - Version 2.1-3 00:13:47.545 http://cunit.sourceforge.net/ 00:13:47.545 00:13:47.545 00:13:47.545 Suite: bdevio tests on: Nvme1n1 00:13:47.545 Test: blockdev write read block ...passed 00:13:47.545 Test: blockdev write zeroes read block ...passed 00:13:47.545 Test: blockdev write zeroes read no split ...passed 00:13:47.545 Test: blockdev write zeroes read split ...passed 00:13:47.545 Test: blockdev write zeroes read split partial ...passed 00:13:47.545 Test: blockdev reset ...[2024-12-17 00:29:33.473249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:47.545 [2024-12-17 00:29:33.473358] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x93f2d0 (9): Bad file descriptor 00:13:47.545 [2024-12-17 00:29:33.493650] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
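The bdevio binary on the initiator side is not pointed at a config file on disk; it reads an SPDK JSON config from a file descriptor (the --json /dev/fd/62 in the trace is the shell exposing gen_nvmf_target_json's output), and that config does nothing but attach an NVMe-oF controller to the listener created above. A minimal equivalent written to a temporary file; the attach-controller entry is copied from the trace, while the outer "subsystems"/"bdev" wrapper is assumed here to be the usual SPDK JSON-config shape rather than quoted from the log:

cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /tmp/nvme1.json --no-huge -s 1024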
00:13:47.545 passed 00:13:47.545 Test: blockdev write read 8 blocks ...passed 00:13:47.545 Test: blockdev write read size > 128k ...passed 00:13:47.545 Test: blockdev write read invalid size ...passed 00:13:47.545 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:47.545 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:47.545 Test: blockdev write read max offset ...passed 00:13:47.545 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:47.545 Test: blockdev writev readv 8 blocks ...passed 00:13:47.545 Test: blockdev writev readv 30 x 1block ...passed 00:13:47.545 Test: blockdev writev readv block ...passed 00:13:47.545 Test: blockdev writev readv size > 128k ...passed 00:13:47.545 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:47.545 Test: blockdev comparev and writev ...[2024-12-17 00:29:33.502768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:47.545 [2024-12-17 00:29:33.502968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:47.545 [2024-12-17 00:29:33.503115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:47.545 [2024-12-17 00:29:33.503237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:47.545 [2024-12-17 00:29:33.503685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:47.545 [2024-12-17 00:29:33.503893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:47.545 [2024-12-17 00:29:33.504071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:47.545 [2024-12-17 00:29:33.504259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:47.545 [2024-12-17 00:29:33.504836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:47.545 [2024-12-17 00:29:33.504977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:47.545 [2024-12-17 00:29:33.505121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:47.545 [2024-12-17 00:29:33.505227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:47.545 [2024-12-17 00:29:33.505654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:47.545 [2024-12-17 00:29:33.505759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:47.545 [2024-12-17 00:29:33.505867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:47.545 [2024-12-17 00:29:33.505985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:47.545 passed 00:13:47.545 Test: blockdev nvme passthru rw ...passed 00:13:47.545 Test: blockdev nvme passthru vendor specific ...[2024-12-17 00:29:33.506828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:47.545 [2024-12-17 00:29:33.506975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:47.545 [2024-12-17 00:29:33.507214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:47.545 [2024-12-17 00:29:33.507372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:47.545 [2024-12-17 00:29:33.507591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:47.545 [2024-12-17 00:29:33.507697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:47.545 [2024-12-17 00:29:33.507910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:47.545 [2024-12-17 00:29:33.508034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:47.545 passed 00:13:47.545 Test: blockdev nvme admin passthru ...passed 00:13:47.545 Test: blockdev copy ...passed 00:13:47.545 00:13:47.545 Run Summary: Type Total Ran Passed Failed Inactive 00:13:47.545 suites 1 1 n/a 0 0 00:13:47.545 tests 23 23 23 0 0 00:13:47.545 asserts 152 152 152 0 n/a 00:13:47.545 00:13:47.545 Elapsed time = 0.177 seconds 00:13:48.113 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:48.113 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.113 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:48.113 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.113 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:48.113 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:13:48.113 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # nvmfcleanup 00:13:48.113 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:13:48.113 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:48.113 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:13:48.113 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:48.113 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:48.113 rmmod nvme_tcp 00:13:48.113 rmmod nvme_fabrics 00:13:48.113 rmmod nvme_keyring 00:13:48.113 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:48.113 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:13:48.113 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:13:48.113 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@513 -- # '[' -n 82777 ']' 00:13:48.113 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # killprocess 82777 00:13:48.113 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 82777 ']' 00:13:48.113 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 82777 00:13:48.113 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:13:48.113 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:48.113 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82777 00:13:48.113 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:13:48.113 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:13:48.113 killing process with pid 82777 00:13:48.113 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82777' 00:13:48.113 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 82777 00:13:48.113 00:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 82777 00:13:48.373 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:13:48.373 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:13:48.373 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:13:48.373 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:13:48.373 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-save 00:13:48.373 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:13:48.373 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-restore 00:13:48.373 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:48.373 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:48.373 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:48.373 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:48.373 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:48.632 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:48.632 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:48.632 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:48.632 00:29:34 
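The teardown traced here mirrors the setup: unload the NVMe kernel modules, kill the target, and restore the firewall by filtering out every rule that carries the SPDK_NVMF comment, so rules that existed before the test survive untouched. Roughly (names from the log; ordering simplified, error handling omitted):

modprobe -r nvme-tcp nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"
iptables-save | grep -v SPDK_NVMF | iptables-restore          # drop only the tagged rules
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster
    ip link set "$dev" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns delete nvmf_tgt_ns_spdk                              # removes the in-namespace veth ends too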
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:48.632 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:48.632 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:48.632 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:48.632 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:48.632 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:48.632 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:48.632 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:48.632 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.632 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:48.632 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.632 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:13:48.632 00:13:48.632 real 0m2.864s 00:13:48.632 user 0m7.755s 00:13:48.632 sys 0m1.374s 00:13:48.633 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:48.633 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:48.633 ************************************ 00:13:48.633 END TEST nvmf_bdevio_no_huge 00:13:48.633 ************************************ 00:13:48.633 00:29:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:48.633 00:29:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:48.633 00:29:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:48.633 00:29:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:48.633 ************************************ 00:13:48.633 START TEST nvmf_tls 00:13:48.633 ************************************ 00:13:48.633 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:48.892 * Looking for test storage... 
00:13:48.892 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:48.892 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:48.892 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:13:48.892 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:48.892 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:48.892 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:48.892 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:48.892 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:48.892 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:13:48.892 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:13:48.892 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:13:48.892 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:13:48.892 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:13:48.892 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:13:48.892 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:13:48.892 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:48.892 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:13:48.892 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:13:48.892 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:48.892 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:48.892 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:13:48.892 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:13:48.892 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:48.892 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:13:48.892 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:13:48.892 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:13:48.892 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:13:48.892 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:48.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.893 --rc genhtml_branch_coverage=1 00:13:48.893 --rc genhtml_function_coverage=1 00:13:48.893 --rc genhtml_legend=1 00:13:48.893 --rc geninfo_all_blocks=1 00:13:48.893 --rc geninfo_unexecuted_blocks=1 00:13:48.893 00:13:48.893 ' 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:48.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.893 --rc genhtml_branch_coverage=1 00:13:48.893 --rc genhtml_function_coverage=1 00:13:48.893 --rc genhtml_legend=1 00:13:48.893 --rc geninfo_all_blocks=1 00:13:48.893 --rc geninfo_unexecuted_blocks=1 00:13:48.893 00:13:48.893 ' 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:48.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.893 --rc genhtml_branch_coverage=1 00:13:48.893 --rc genhtml_function_coverage=1 00:13:48.893 --rc genhtml_legend=1 00:13:48.893 --rc geninfo_all_blocks=1 00:13:48.893 --rc geninfo_unexecuted_blocks=1 00:13:48.893 00:13:48.893 ' 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:48.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.893 --rc genhtml_branch_coverage=1 00:13:48.893 --rc genhtml_function_coverage=1 00:13:48.893 --rc genhtml_legend=1 00:13:48.893 --rc geninfo_all_blocks=1 00:13:48.893 --rc geninfo_unexecuted_blocks=1 00:13:48.893 00:13:48.893 ' 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:48.893 00:29:34 
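The lt/cmp_versions trace a little above is scripts/common.sh deciding whether the installed lcov is older than 2.x by splitting both version strings on '.', '-' and ':' and comparing them field by field. Purely as an illustration (this one-liner is not what the script uses), the same ordering test can be written with sort -V:

# hypothetical helper: true if version $1 sorts strictly before version $2
lt() { [[ $1 != "$2" && $(printf '%s\n' "$1" "$2" | sort -V | head -n1) == "$1" ]]; }
lt 1.15 2 && echo "installed lcov is older than 2.x"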
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:48.893 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:13:48.893 
00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # prepare_net_devs 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@434 -- # local -g is_hw=no 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # remove_spdk_ns 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@456 -- # nvmf_veth_init 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:48.893 Cannot find device "nvmf_init_br" 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:13:48.893 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:48.894 Cannot find device "nvmf_init_br2" 00:13:48.894 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:13:48.894 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:48.894 Cannot find device "nvmf_tgt_br" 00:13:48.894 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:13:48.894 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:48.894 Cannot find device "nvmf_tgt_br2" 00:13:48.894 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:13:48.894 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:48.894 Cannot find device "nvmf_init_br" 00:13:48.894 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:13:48.894 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:48.894 Cannot find device "nvmf_init_br2" 00:13:48.894 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:13:48.894 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:49.152 Cannot find device "nvmf_tgt_br" 00:13:49.153 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:13:49.153 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:49.153 Cannot find device "nvmf_tgt_br2" 00:13:49.153 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:13:49.153 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:49.153 Cannot find device "nvmf_br" 00:13:49.153 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:13:49.153 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:49.153 Cannot find device "nvmf_init_if" 00:13:49.153 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:13:49.153 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:49.153 Cannot find device "nvmf_init_if2" 00:13:49.153 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:13:49.153 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:49.153 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:49.153 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:13:49.153 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:49.153 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:49.153 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:13:49.153 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:49.153 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:49.153 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:49.153 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:49.153 00:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:49.153 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:49.153 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:49.153 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:49.153 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:49.153 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:49.153 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:49.153 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:49.153 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:49.153 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:49.153 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:49.153 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:49.153 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:49.153 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:49.153 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:49.153 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:49.411 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:49.411 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:49.411 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:49.411 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:49.411 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:49.411 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:49.411 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:49.411 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:49.411 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:49.411 00:29:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:49.411 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:49.411 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:49.411 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:49.411 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:49.411 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.118 ms 00:13:49.411 00:13:49.411 --- 10.0.0.3 ping statistics --- 00:13:49.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.411 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:13:49.411 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:49.411 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:49.411 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:13:49.411 00:13:49.411 --- 10.0.0.4 ping statistics --- 00:13:49.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.411 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:13:49.411 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:49.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:49.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:13:49.411 00:13:49.411 --- 10.0.0.1 ping statistics --- 00:13:49.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.411 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:13:49.411 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:49.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:49.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:13:49.411 00:13:49.411 --- 10.0.0.2 ping statistics --- 00:13:49.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.411 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:13:49.411 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:49.411 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@457 -- # return 0 00:13:49.411 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:13:49.411 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:49.411 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:13:49.411 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:13:49.411 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:49.411 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:13:49.411 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:13:49.411 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:13:49.411 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:13:49.411 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:49.411 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:49.411 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=83048 00:13:49.411 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:13:49.411 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 83048 00:13:49.411 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83048 ']' 00:13:49.411 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.411 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:49.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.411 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.411 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:49.411 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:49.411 [2024-12-17 00:29:35.340906] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:13:49.411 [2024-12-17 00:29:35.341526] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:49.669 [2024-12-17 00:29:35.482043] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.669 [2024-12-17 00:29:35.523638] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:49.669 [2024-12-17 00:29:35.523696] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:49.669 [2024-12-17 00:29:35.523717] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:49.669 [2024-12-17 00:29:35.523733] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:49.669 [2024-12-17 00:29:35.523746] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:49.669 [2024-12-17 00:29:35.523799] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:49.669 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:49.669 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:49.669 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:13:49.669 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:49.669 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:49.669 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:49.669 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:13:49.669 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:49.928 true 00:13:49.928 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:49.928 00:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:13:50.187 00:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:13:50.187 00:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:13:50.187 00:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:50.754 00:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:13:50.754 00:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:50.754 00:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:13:50.754 00:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:13:50.754 00:29:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:51.322 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:13:51.322 00:29:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:51.322 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:13:51.322 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:13:51.322 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:13:51.322 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:51.890 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:13:51.890 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:13:51.890 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:51.890 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:51.890 00:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:13:52.149 00:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:13:52.149 00:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:13:52.149 00:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:52.717 00:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:52.717 00:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:13:52.976 00:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:13:52.976 00:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:13:52.976 00:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:52.976 00:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:52.976 00:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:13:52.976 00:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:13:52.976 00:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:13:52.976 00:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:13:52.976 00:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:13:52.976 00:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:52.976 00:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:52.976 00:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:52.976 00:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:13:52.976 00:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:13:52.976 00:29:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=ffeeddccbbaa99887766554433221100 00:13:52.976 00:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:13:52.976 00:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:13:52.976 00:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:52.976 00:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:13:52.976 00:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.ZrkW4zZtlZ 00:13:52.976 00:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:13:52.976 00:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.bQTsQwCKsc 00:13:52.976 00:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:52.976 00:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:52.976 00:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.ZrkW4zZtlZ 00:13:52.976 00:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.bQTsQwCKsc 00:13:52.976 00:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:53.235 00:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:53.803 [2024-12-17 00:29:39.500193] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:53.803 00:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.ZrkW4zZtlZ 00:13:53.803 00:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ZrkW4zZtlZ 00:13:53.803 00:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:54.062 [2024-12-17 00:29:39.827154] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:54.062 00:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:54.321 00:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:54.580 [2024-12-17 00:29:40.451400] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:54.580 [2024-12-17 00:29:40.451625] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:54.580 00:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:54.840 malloc0 00:13:54.840 00:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:55.408 00:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 
/tmp/tmp.ZrkW4zZtlZ 00:13:55.666 00:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:55.926 00:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.ZrkW4zZtlZ 00:14:05.907 Initializing NVMe Controllers 00:14:05.907 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:05.907 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:05.907 Initialization complete. Launching workers. 00:14:05.907 ======================================================== 00:14:05.907 Latency(us) 00:14:05.907 Device Information : IOPS MiB/s Average min max 00:14:05.907 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9966.66 38.93 6422.62 1877.08 13215.75 00:14:05.907 ======================================================== 00:14:05.907 Total : 9966.66 38.93 6422.62 1877.08 13215.75 00:14:05.907 00:14:05.907 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZrkW4zZtlZ 00:14:05.907 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:05.907 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:05.907 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:05.907 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ZrkW4zZtlZ 00:14:05.907 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:05.907 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83286 00:14:05.907 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:05.907 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:05.907 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83286 /var/tmp/bdevperf.sock 00:14:05.907 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83286 ']' 00:14:05.907 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:05.907 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:05.907 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:05.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
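The two interchange keys generated earlier by format_interchange_psk (target/tls.sh@119 and @120) are what got written to /tmp/tmp.ZrkW4zZtlZ and /tmp/tmp.bQTsQwCKsc above. Judging from the inputs and outputs in this trace, the helper base64-encodes the ASCII bytes of the configured key plus a short integrity tail and wraps the result as NVMeTLSkey-1:<digest>:<base64>:. A hedged reconstruction of that formatting (the CRC-32 tail and its byte order are assumptions, not taken from the trace):

$ python3 - <<'EOF'
import base64, zlib
key = b"00112233445566778899aabbccddeeff"   # configured PSK, encoded as its ASCII bytes as the trace output suggests
crc = zlib.crc32(key).to_bytes(4, "little")  # assumed 4-byte CRC-32 tail
# digest field: 01 here; the longer key built later in the log uses 02
print("NVMeTLSkey-1:01:%s:" % base64.b64encode(key + crc).decode())
EOF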
00:14:05.907 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:05.907 00:29:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:06.166 [2024-12-17 00:29:51.933820] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:14:06.166 [2024-12-17 00:29:51.934116] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83286 ] 00:14:06.166 [2024-12-17 00:29:52.075014] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.166 [2024-12-17 00:29:52.116961] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.166 [2024-12-17 00:29:52.150072] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:06.425 00:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:06.425 00:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:06.425 00:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZrkW4zZtlZ 00:14:06.684 00:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:06.943 [2024-12-17 00:29:52.708155] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:06.943 TLSTESTn1 00:14:06.943 00:29:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:07.201 Running I/O for 10 seconds... 
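The TLSTESTn1 attach that precedes this run needed only three commands against the bdevperf application, restated here in one place from the trace at target/tls.sh@33, @35 and @42 (same socket, key file and NQNs as above):

$ /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZrkW4zZtlZ
$ /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
$ /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests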
00:14:09.073 4313.00 IOPS, 16.85 MiB/s [2024-12-17T00:29:56.011Z] 4364.50 IOPS, 17.05 MiB/s [2024-12-17T00:29:57.389Z] 4325.33 IOPS, 16.90 MiB/s [2024-12-17T00:29:58.325Z] 4359.25 IOPS, 17.03 MiB/s [2024-12-17T00:29:59.261Z] 4366.20 IOPS, 17.06 MiB/s [2024-12-17T00:30:00.217Z] 4315.33 IOPS, 16.86 MiB/s [2024-12-17T00:30:01.154Z] 4260.57 IOPS, 16.64 MiB/s [2024-12-17T00:30:02.090Z] 4218.75 IOPS, 16.48 MiB/s [2024-12-17T00:30:03.026Z] 4172.67 IOPS, 16.30 MiB/s [2024-12-17T00:30:03.026Z] 4150.80 IOPS, 16.21 MiB/s 00:14:17.023 Latency(us) 00:14:17.023 [2024-12-17T00:30:03.026Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:17.023 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:17.023 Verification LBA range: start 0x0 length 0x2000 00:14:17.023 TLSTESTn1 : 10.02 4156.59 16.24 0.00 0.00 30739.80 5034.36 39559.91 00:14:17.023 [2024-12-17T00:30:03.026Z] =================================================================================================================== 00:14:17.023 [2024-12-17T00:30:03.026Z] Total : 4156.59 16.24 0.00 0.00 30739.80 5034.36 39559.91 00:14:17.023 { 00:14:17.023 "results": [ 00:14:17.023 { 00:14:17.023 "job": "TLSTESTn1", 00:14:17.023 "core_mask": "0x4", 00:14:17.023 "workload": "verify", 00:14:17.023 "status": "finished", 00:14:17.023 "verify_range": { 00:14:17.023 "start": 0, 00:14:17.023 "length": 8192 00:14:17.023 }, 00:14:17.023 "queue_depth": 128, 00:14:17.023 "io_size": 4096, 00:14:17.023 "runtime": 10.016154, 00:14:17.023 "iops": 4156.585451861064, 00:14:17.023 "mibps": 16.23666192133228, 00:14:17.023 "io_failed": 0, 00:14:17.023 "io_timeout": 0, 00:14:17.023 "avg_latency_us": 30739.80209300751, 00:14:17.023 "min_latency_us": 5034.356363636363, 00:14:17.023 "max_latency_us": 39559.91272727273 00:14:17.023 } 00:14:17.023 ], 00:14:17.023 "core_count": 1 00:14:17.023 } 00:14:17.023 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:17.023 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 83286 00:14:17.023 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83286 ']' 00:14:17.023 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83286 00:14:17.023 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:17.023 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:17.023 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83286 00:14:17.282 killing process with pid 83286 00:14:17.282 Received shutdown signal, test time was about 10.000000 seconds 00:14:17.282 00:14:17.282 Latency(us) 00:14:17.282 [2024-12-17T00:30:03.285Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:17.282 [2024-12-17T00:30:03.285Z] =================================================================================================================== 00:14:17.282 [2024-12-17T00:30:03.285Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:17.282 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:17.282 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:17.282 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 83286' 00:14:17.282 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83286 00:14:17.282 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83286 00:14:17.282 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bQTsQwCKsc 00:14:17.282 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:17.282 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bQTsQwCKsc 00:14:17.282 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:17.282 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:17.282 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:17.282 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:17.282 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bQTsQwCKsc 00:14:17.282 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:17.282 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:17.282 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:17.282 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.bQTsQwCKsc 00:14:17.282 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:17.282 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83419 00:14:17.282 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:17.282 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:17.282 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83419 /var/tmp/bdevperf.sock 00:14:17.282 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83419 ']' 00:14:17.282 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:17.282 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:17.283 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:17.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:17.283 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:17.283 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:17.283 [2024-12-17 00:30:03.253706] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:14:17.283 [2024-12-17 00:30:03.254086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83419 ] 00:14:17.542 [2024-12-17 00:30:03.393635] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.542 [2024-12-17 00:30:03.431325] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:17.542 [2024-12-17 00:30:03.463227] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:17.542 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:17.542 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:17.542 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.bQTsQwCKsc 00:14:18.110 00:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:18.110 [2024-12-17 00:30:04.112231] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:18.368 [2024-12-17 00:30:04.123864] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:18.368 [2024-12-17 00:30:04.123997] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x621d30 (107): Transport endpoint is not connected 00:14:18.368 [2024-12-17 00:30:04.124988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x621d30 (9): Bad file descriptor 00:14:18.368 [2024-12-17 00:30:04.125985] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:18.368 [2024-12-17 00:30:04.126010] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:18.368 [2024-12-17 00:30:04.126038] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:18.368 [2024-12-17 00:30:04.126048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:18.368 request: 00:14:18.368 { 00:14:18.368 "name": "TLSTEST", 00:14:18.368 "trtype": "tcp", 00:14:18.368 "traddr": "10.0.0.3", 00:14:18.368 "adrfam": "ipv4", 00:14:18.368 "trsvcid": "4420", 00:14:18.368 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:18.368 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:18.368 "prchk_reftag": false, 00:14:18.368 "prchk_guard": false, 00:14:18.368 "hdgst": false, 00:14:18.368 "ddgst": false, 00:14:18.368 "psk": "key0", 00:14:18.368 "allow_unrecognized_csi": false, 00:14:18.368 "method": "bdev_nvme_attach_controller", 00:14:18.368 "req_id": 1 00:14:18.368 } 00:14:18.368 Got JSON-RPC error response 00:14:18.368 response: 00:14:18.368 { 00:14:18.368 "code": -5, 00:14:18.368 "message": "Input/output error" 00:14:18.368 } 00:14:18.368 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83419 00:14:18.368 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83419 ']' 00:14:18.368 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83419 00:14:18.368 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:18.368 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:18.368 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83419 00:14:18.368 killing process with pid 83419 00:14:18.368 Received shutdown signal, test time was about 10.000000 seconds 00:14:18.368 00:14:18.368 Latency(us) 00:14:18.368 [2024-12-17T00:30:04.371Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:18.368 [2024-12-17T00:30:04.371Z] =================================================================================================================== 00:14:18.368 [2024-12-17T00:30:04.371Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:18.368 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:18.369 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:18.369 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83419' 00:14:18.369 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83419 00:14:18.369 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83419 00:14:18.369 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:18.369 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:18.369 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:18.369 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:18.369 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:18.369 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ZrkW4zZtlZ 00:14:18.369 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:18.369 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ZrkW4zZtlZ 
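This first failure is the expected outcome: the host offered /tmp/tmp.bQTsQwCKsc while the target only has the key from /tmp/tmp.ZrkW4zZtlZ registered for this host, so the attach is supposed to fail. The test drives each such case through the NOT helper, whose xtrace shows up above as valid_exec_arg, es=1 and (( !es == 0 )). A minimal stand-in with the same contract (succeed only when the wrapped command fails), shown as an illustration rather than the actual helper from autotest_common.sh:

NOT() {
    # invert the exit status of the wrapped command: pass when it fails
    if "$@"; then
        return 1
    fi
    return 0
}
# e.g. NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bQTsQwCKsc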
00:14:18.369 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:18.369 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:18.369 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:18.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:18.369 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:18.369 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ZrkW4zZtlZ 00:14:18.369 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:18.369 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:18.369 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:18.369 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ZrkW4zZtlZ 00:14:18.369 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:18.369 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83440 00:14:18.369 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:18.369 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:18.369 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83440 /var/tmp/bdevperf.sock 00:14:18.369 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83440 ']' 00:14:18.369 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:18.369 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:18.369 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:18.369 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:18.369 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:18.626 [2024-12-17 00:30:04.374223] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:14:18.626 [2024-12-17 00:30:04.374330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83440 ] 00:14:18.626 [2024-12-17 00:30:04.503864] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.626 [2024-12-17 00:30:04.539955] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:18.626 [2024-12-17 00:30:04.568075] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:18.885 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:18.885 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:18.885 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZrkW4zZtlZ 00:14:19.144 00:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:14:19.403 [2024-12-17 00:30:05.232057] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:19.403 [2024-12-17 00:30:05.243877] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:19.403 [2024-12-17 00:30:05.244109] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:19.403 [2024-12-17 00:30:05.244488] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:19.403 [2024-12-17 00:30:05.244864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc23d30 (107): Transport endpoint is not connected 00:14:19.403 [2024-12-17 00:30:05.245856] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc23d30 (9): Bad file descriptor 00:14:19.403 [2024-12-17 00:30:05.246852] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:19.403 [2024-12-17 00:30:05.247021] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:19.403 [2024-12-17 00:30:05.247233] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4request: 00:14:19.403 { 00:14:19.403 "name": "TLSTEST", 00:14:19.403 "trtype": "tcp", 00:14:19.403 "traddr": "10.0.0.3", 00:14:19.403 "adrfam": "ipv4", 00:14:19.403 "trsvcid": "4420", 00:14:19.403 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:19.403 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:19.403 "prchk_reftag": false, 00:14:19.403 "prchk_guard": false, 00:14:19.403 "hdgst": false, 00:14:19.403 "ddgst": false, 00:14:19.403 "psk": "key0", 00:14:19.403 "allow_unrecognized_csi": false, 00:14:19.403 "method": "bdev_nvme_attach_controller", 00:14:19.403 "req_id": 1 00:14:19.403 } 00:14:19.403 Got JSON-RPC error response 00:14:19.403 response: 00:14:19.403 { 00:14:19.403 "code": -5, 00:14:19.403 "message": 
"Input/output error" 00:14:19.403 } 00:14:19.403 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:19.403 [2024-12-17 00:30:05.247385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:19.403 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83440 00:14:19.403 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83440 ']' 00:14:19.403 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83440 00:14:19.403 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:19.403 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:19.403 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83440 00:14:19.403 killing process with pid 83440 00:14:19.403 Received shutdown signal, test time was about 10.000000 seconds 00:14:19.403 00:14:19.403 Latency(us) 00:14:19.403 [2024-12-17T00:30:05.406Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:19.403 [2024-12-17T00:30:05.406Z] =================================================================================================================== 00:14:19.403 [2024-12-17T00:30:05.406Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:19.403 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:19.403 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:19.403 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83440' 00:14:19.403 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83440 00:14:19.403 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83440 00:14:19.662 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:19.662 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:19.662 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:19.662 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:19.662 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:19.662 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZrkW4zZtlZ 00:14:19.662 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:19.662 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZrkW4zZtlZ 00:14:19.662 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:19.662 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:19.662 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:19.662 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:19.662 
00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZrkW4zZtlZ 00:14:19.662 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:19.662 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:19.662 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:19.662 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ZrkW4zZtlZ 00:14:19.662 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:19.662 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:19.662 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83461 00:14:19.662 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:19.662 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83461 /var/tmp/bdevperf.sock 00:14:19.662 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83461 ']' 00:14:19.662 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:19.662 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:19.662 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:19.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:19.662 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:19.662 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:19.662 [2024-12-17 00:30:05.477426] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
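The host2 rejection above (and the analogous cnode2 case being set up here) fails during the target-side PSK lookup rather than in the TLS handshake itself: the tcp.c and posix.c errors show the server deriving a PSK identity of the form 'NVMe0R01 <hostnqn> <subnqn>' and finding no key registered for it, since only the host1/cnode1 pairing was added with nvmf_subsystem_add_host. An illustration of that identity string using the NQNs from the failure above (the format is inferred from the error text in this log):

$ hostnqn=nqn.2016-06.io.spdk:host2
$ subnqn=nqn.2016-06.io.spdk:cnode1
$ echo "NVMe0R01 ${hostnqn} ${subnqn}"
NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1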
00:14:19.662 [2024-12-17 00:30:05.477517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83461 ] 00:14:19.662 [2024-12-17 00:30:05.614179] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.662 [2024-12-17 00:30:05.655255] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:19.922 [2024-12-17 00:30:05.687984] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:19.922 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:19.922 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:19.922 00:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZrkW4zZtlZ 00:14:20.181 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:20.440 [2024-12-17 00:30:06.321218] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:20.440 [2024-12-17 00:30:06.326198] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:20.440 [2024-12-17 00:30:06.326470] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:20.440 [2024-12-17 00:30:06.326659] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:20.440 [2024-12-17 00:30:06.326949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e1d30 (107): Transport endpoint is not connected 00:14:20.440 [2024-12-17 00:30:06.327953] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e1d30 (9): Bad file descriptor 00:14:20.440 [2024-12-17 00:30:06.328934] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:14:20.440 [2024-12-17 00:30:06.329106] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:20.440 [2024-12-17 00:30:06.329261] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:14:20.440 [2024-12-17 00:30:06.329436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:14:20.440 request: 00:14:20.440 { 00:14:20.440 "name": "TLSTEST", 00:14:20.440 "trtype": "tcp", 00:14:20.440 "traddr": "10.0.0.3", 00:14:20.440 "adrfam": "ipv4", 00:14:20.440 "trsvcid": "4420", 00:14:20.440 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:20.440 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:20.440 "prchk_reftag": false, 00:14:20.440 "prchk_guard": false, 00:14:20.440 "hdgst": false, 00:14:20.440 "ddgst": false, 00:14:20.440 "psk": "key0", 00:14:20.440 "allow_unrecognized_csi": false, 00:14:20.440 "method": "bdev_nvme_attach_controller", 00:14:20.440 "req_id": 1 00:14:20.440 } 00:14:20.440 Got JSON-RPC error response 00:14:20.440 response: 00:14:20.440 { 00:14:20.440 "code": -5, 00:14:20.440 "message": "Input/output error" 00:14:20.440 } 00:14:20.440 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83461 00:14:20.440 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83461 ']' 00:14:20.440 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83461 00:14:20.440 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:20.440 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:20.440 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83461 00:14:20.440 killing process with pid 83461 00:14:20.440 Received shutdown signal, test time was about 10.000000 seconds 00:14:20.440 00:14:20.440 Latency(us) 00:14:20.440 [2024-12-17T00:30:06.443Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:20.440 [2024-12-17T00:30:06.443Z] =================================================================================================================== 00:14:20.440 [2024-12-17T00:30:06.443Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:20.440 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:20.440 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:20.440 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83461' 00:14:20.440 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83461 00:14:20.440 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83461 00:14:20.699 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:20.699 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:20.699 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:20.699 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:20.699 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:20.699 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:20.699 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:20.699 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:20.699 00:30:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:20.699 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:20.699 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:20.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:20.699 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:20.699 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:20.699 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:20.699 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:20.699 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:20.699 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:14:20.699 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:20.699 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83482 00:14:20.699 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:20.699 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:20.699 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83482 /var/tmp/bdevperf.sock 00:14:20.699 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83482 ']' 00:14:20.699 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:20.699 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:20.699 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:20.699 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:20.699 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:20.699 [2024-12-17 00:30:06.557587] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:14:20.699 [2024-12-17 00:30:06.557941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83482 ] 00:14:20.699 [2024-12-17 00:30:06.693297] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.958 [2024-12-17 00:30:06.734365] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:20.958 [2024-12-17 00:30:06.767015] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:20.958 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:20.958 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:20.958 00:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:14:21.216 [2024-12-17 00:30:07.148134] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:14:21.216 [2024-12-17 00:30:07.148503] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:21.216 request: 00:14:21.216 { 00:14:21.216 "name": "key0", 00:14:21.216 "path": "", 00:14:21.216 "method": "keyring_file_add_key", 00:14:21.216 "req_id": 1 00:14:21.216 } 00:14:21.216 Got JSON-RPC error response 00:14:21.216 response: 00:14:21.216 { 00:14:21.216 "code": -1, 00:14:21.216 "message": "Operation not permitted" 00:14:21.216 } 00:14:21.216 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:21.474 [2024-12-17 00:30:07.476343] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:21.474 [2024-12-17 00:30:07.476590] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:14:21.734 request: 00:14:21.734 { 00:14:21.734 "name": "TLSTEST", 00:14:21.734 "trtype": "tcp", 00:14:21.734 "traddr": "10.0.0.3", 00:14:21.734 "adrfam": "ipv4", 00:14:21.734 "trsvcid": "4420", 00:14:21.734 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.734 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:21.734 "prchk_reftag": false, 00:14:21.734 "prchk_guard": false, 00:14:21.734 "hdgst": false, 00:14:21.734 "ddgst": false, 00:14:21.734 "psk": "key0", 00:14:21.734 "allow_unrecognized_csi": false, 00:14:21.734 "method": "bdev_nvme_attach_controller", 00:14:21.734 "req_id": 1 00:14:21.734 } 00:14:21.734 Got JSON-RPC error response 00:14:21.734 response: 00:14:21.734 { 00:14:21.734 "code": -126, 00:14:21.734 "message": "Required key not available" 00:14:21.734 } 00:14:21.734 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83482 00:14:21.734 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83482 ']' 00:14:21.734 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83482 00:14:21.734 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:21.734 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:21.734 00:30:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83482 00:14:21.734 killing process with pid 83482 00:14:21.734 Received shutdown signal, test time was about 10.000000 seconds 00:14:21.734 00:14:21.734 Latency(us) 00:14:21.734 [2024-12-17T00:30:07.737Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.734 [2024-12-17T00:30:07.737Z] =================================================================================================================== 00:14:21.734 [2024-12-17T00:30:07.737Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:21.734 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:21.734 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:21.734 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83482' 00:14:21.734 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83482 00:14:21.734 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83482 00:14:21.734 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:21.734 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:21.734 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:21.734 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:21.734 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:21.734 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 83048 00:14:21.734 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83048 ']' 00:14:21.734 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83048 00:14:21.734 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:21.734 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:21.734 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83048 00:14:21.734 killing process with pid 83048 00:14:21.734 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:21.734 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:21.734 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83048' 00:14:21.734 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83048 00:14:21.734 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83048 00:14:21.993 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:21.993 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:21.993 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:14:21.993 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 
-- # prefix=NVMeTLSkey-1 00:14:21.993 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:21.993 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=2 00:14:21.993 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:14:21.993 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:21.993 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:14:21.993 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.48HBuRedJV 00:14:21.993 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:21.993 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.48HBuRedJV 00:14:21.993 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:14:21.993 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:21.993 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:21.993 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:21.993 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=83513 00:14:21.993 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 83513 00:14:21.993 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83513 ']' 00:14:21.993 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.993 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:21.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.993 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:21.993 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.993 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:21.993 00:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:21.993 [2024-12-17 00:30:07.969621] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:14:21.993 [2024-12-17 00:30:07.969755] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:22.252 [2024-12-17 00:30:08.112113] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.252 [2024-12-17 00:30:08.151097] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:22.252 [2024-12-17 00:30:08.151159] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:22.252 [2024-12-17 00:30:08.151174] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:22.252 [2024-12-17 00:30:08.151186] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:22.252 [2024-12-17 00:30:08.151195] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:22.252 [2024-12-17 00:30:08.151223] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:22.252 [2024-12-17 00:30:08.183644] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:23.189 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:23.189 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:23.189 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:23.189 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:23.189 00:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:23.189 00:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:23.189 00:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.48HBuRedJV 00:14:23.189 00:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.48HBuRedJV 00:14:23.189 00:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:23.448 [2024-12-17 00:30:09.249733] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:23.448 00:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:23.706 00:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:23.966 [2024-12-17 00:30:09.893911] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:23.966 [2024-12-17 00:30:09.894164] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:23.966 00:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:24.224 malloc0 00:14:24.224 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:24.790 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.48HBuRedJV 00:14:25.048 00:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:25.318 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.48HBuRedJV 00:14:25.318 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
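Stripped of the xtrace prefixes, the TLS key preparation and target-side setup exercised above reduce to the short shell sequence below. The key string, temp path, NQNs and listen address are exactly the ones printed in this run; rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and the redirection into the key file is inferred, since xtrace does not show redirections.

  # build the interchange-format PSK (digest argument 2 yields the ':02:' designator seen above)
  key_long='NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:'
  key_long_path=$(mktemp)                 # /tmp/tmp.48HBuRedJV in this run
  echo -n "$key_long" > "$key_long_path"
  chmod 0600 "$key_long_path"             # looser modes are rejected later in this trace

  # target-side TLS setup against the running nvmf_tgt (pid 83513)
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k: TLS listener ("secure_channel" in save_config)
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 "$key_long_path"
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0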
00:14:25.318 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:25.318 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:25.318 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.48HBuRedJV 00:14:25.318 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:25.318 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83574 00:14:25.318 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:25.318 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:25.318 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83574 /var/tmp/bdevperf.sock 00:14:25.318 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83574 ']' 00:14:25.318 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:25.318 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:25.318 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:25.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:25.318 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:25.318 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:25.318 [2024-12-17 00:30:11.289697] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
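The bdevperf instance launched above (-m 0x4 -z -r /var/tmp/bdevperf.sock, pid 83574) is driven over its own RPC socket. The initiator-side steps the following trace performs amount roughly to:

  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.48HBuRedJV
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests   # runs the verify workload configured at launch (-q 128 -o 4096 -w verify -t 10)

In this run the verify workload on TLSTESTn1 completes at roughly 4233 IOPS (about 16.5 MiB/s) over the 10-second window, as the Latency table below records.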
00:14:25.318 [2024-12-17 00:30:11.290108] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83574 ] 00:14:25.603 [2024-12-17 00:30:11.426770] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.603 [2024-12-17 00:30:11.462365] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:25.603 [2024-12-17 00:30:11.491207] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:25.603 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:25.603 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:25.603 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.48HBuRedJV 00:14:25.862 00:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:26.121 [2024-12-17 00:30:12.002955] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:26.121 TLSTESTn1 00:14:26.121 00:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:26.380 Running I/O for 10 seconds... 00:14:28.266 4430.00 IOPS, 17.30 MiB/s [2024-12-17T00:30:15.205Z] 4350.50 IOPS, 16.99 MiB/s [2024-12-17T00:30:16.580Z] 4361.67 IOPS, 17.04 MiB/s [2024-12-17T00:30:17.516Z] 4417.50 IOPS, 17.26 MiB/s [2024-12-17T00:30:18.453Z] 4395.60 IOPS, 17.17 MiB/s [2024-12-17T00:30:19.387Z] 4332.50 IOPS, 16.92 MiB/s [2024-12-17T00:30:20.322Z] 4283.86 IOPS, 16.73 MiB/s [2024-12-17T00:30:21.258Z] 4236.50 IOPS, 16.55 MiB/s [2024-12-17T00:30:22.635Z] 4243.78 IOPS, 16.58 MiB/s [2024-12-17T00:30:22.635Z] 4228.20 IOPS, 16.52 MiB/s 00:14:36.632 Latency(us) 00:14:36.632 [2024-12-17T00:30:22.635Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:36.632 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:36.632 Verification LBA range: start 0x0 length 0x2000 00:14:36.632 TLSTESTn1 : 10.02 4232.69 16.53 0.00 0.00 30179.71 7149.38 28120.90 00:14:36.632 [2024-12-17T00:30:22.635Z] =================================================================================================================== 00:14:36.632 [2024-12-17T00:30:22.635Z] Total : 4232.69 16.53 0.00 0.00 30179.71 7149.38 28120.90 00:14:36.632 { 00:14:36.632 "results": [ 00:14:36.632 { 00:14:36.632 "job": "TLSTESTn1", 00:14:36.632 "core_mask": "0x4", 00:14:36.632 "workload": "verify", 00:14:36.632 "status": "finished", 00:14:36.632 "verify_range": { 00:14:36.632 "start": 0, 00:14:36.632 "length": 8192 00:14:36.632 }, 00:14:36.632 "queue_depth": 128, 00:14:36.632 "io_size": 4096, 00:14:36.632 "runtime": 10.018917, 00:14:36.632 "iops": 4232.693014624236, 00:14:36.632 "mibps": 16.53395708837592, 00:14:36.632 "io_failed": 0, 00:14:36.632 "io_timeout": 0, 00:14:36.632 "avg_latency_us": 30179.706835020803, 00:14:36.632 "min_latency_us": 7149.381818181818, 00:14:36.632 
"max_latency_us": 28120.901818181817 00:14:36.632 } 00:14:36.632 ], 00:14:36.632 "core_count": 1 00:14:36.632 } 00:14:36.632 00:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:36.632 00:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 83574 00:14:36.632 00:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83574 ']' 00:14:36.632 00:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83574 00:14:36.632 00:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:36.632 00:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:36.632 00:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83574 00:14:36.632 killing process with pid 83574 00:14:36.632 Received shutdown signal, test time was about 10.000000 seconds 00:14:36.632 00:14:36.632 Latency(us) 00:14:36.632 [2024-12-17T00:30:22.635Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:36.632 [2024-12-17T00:30:22.635Z] =================================================================================================================== 00:14:36.632 [2024-12-17T00:30:22.635Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:36.632 00:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:36.632 00:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:36.632 00:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83574' 00:14:36.632 00:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83574 00:14:36.632 00:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83574 00:14:36.632 00:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.48HBuRedJV 00:14:36.632 00:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.48HBuRedJV 00:14:36.632 00:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:36.632 00:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.48HBuRedJV 00:14:36.632 00:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:36.632 00:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:36.632 00:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:36.632 00:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:36.633 00:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.48HBuRedJV 00:14:36.633 00:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:36.633 00:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:36.633 00:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:36.633 00:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.48HBuRedJV 00:14:36.633 00:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:36.633 00:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83705 00:14:36.633 00:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:36.633 00:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:36.633 00:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83705 /var/tmp/bdevperf.sock 00:14:36.633 00:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83705 ']' 00:14:36.633 00:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:36.633 00:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:36.633 00:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:36.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:36.633 00:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:36.633 00:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:36.633 [2024-12-17 00:30:22.489177] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
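This second bdevperf pass (pid 83705) is the expected-failure case: the key file was relaxed to 0666 at target/tls.sh@171, so the keyring should reject it and the controller attach should then fail for lack of a loadable PSK. The failing pair of calls, with the errors the trace below records:

  chmod 0666 /tmp/tmp.48HBuRedJV
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.48HBuRedJV
  #   -> error -1 "Operation not permitted"     (Invalid permissions for key file: 0100666)
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  #   -> error -126 "Required key not available" (Could not load PSK: key0)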
00:14:36.633 [2024-12-17 00:30:22.489476] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83705 ] 00:14:36.633 [2024-12-17 00:30:22.625500] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.892 [2024-12-17 00:30:22.664047] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:36.892 [2024-12-17 00:30:22.694857] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:36.892 00:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:36.892 00:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:36.892 00:30:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.48HBuRedJV 00:14:37.198 [2024-12-17 00:30:23.018888] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.48HBuRedJV': 0100666 00:14:37.198 [2024-12-17 00:30:23.019162] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:37.198 request: 00:14:37.198 { 00:14:37.198 "name": "key0", 00:14:37.198 "path": "/tmp/tmp.48HBuRedJV", 00:14:37.198 "method": "keyring_file_add_key", 00:14:37.198 "req_id": 1 00:14:37.198 } 00:14:37.198 Got JSON-RPC error response 00:14:37.198 response: 00:14:37.198 { 00:14:37.198 "code": -1, 00:14:37.198 "message": "Operation not permitted" 00:14:37.198 } 00:14:37.198 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:37.456 [2024-12-17 00:30:23.427074] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:37.456 [2024-12-17 00:30:23.427147] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:14:37.456 request: 00:14:37.456 { 00:14:37.456 "name": "TLSTEST", 00:14:37.456 "trtype": "tcp", 00:14:37.456 "traddr": "10.0.0.3", 00:14:37.456 "adrfam": "ipv4", 00:14:37.456 "trsvcid": "4420", 00:14:37.456 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:37.456 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:37.456 "prchk_reftag": false, 00:14:37.456 "prchk_guard": false, 00:14:37.456 "hdgst": false, 00:14:37.456 "ddgst": false, 00:14:37.456 "psk": "key0", 00:14:37.456 "allow_unrecognized_csi": false, 00:14:37.456 "method": "bdev_nvme_attach_controller", 00:14:37.456 "req_id": 1 00:14:37.456 } 00:14:37.456 Got JSON-RPC error response 00:14:37.456 response: 00:14:37.456 { 00:14:37.456 "code": -126, 00:14:37.456 "message": "Required key not available" 00:14:37.456 } 00:14:37.456 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83705 00:14:37.456 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83705 ']' 00:14:37.456 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83705 00:14:37.456 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:37.456 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:37.456 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83705 00:14:37.714 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:37.714 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:37.714 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83705' 00:14:37.714 killing process with pid 83705 00:14:37.714 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83705 00:14:37.714 Received shutdown signal, test time was about 10.000000 seconds 00:14:37.714 00:14:37.714 Latency(us) 00:14:37.714 [2024-12-17T00:30:23.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:37.714 [2024-12-17T00:30:23.717Z] =================================================================================================================== 00:14:37.714 [2024-12-17T00:30:23.717Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:37.714 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83705 00:14:37.714 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:37.714 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:37.714 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:37.714 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:37.714 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:37.714 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 83513 00:14:37.714 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83513 ']' 00:14:37.714 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83513 00:14:37.714 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:37.714 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:37.714 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83513 00:14:37.714 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:37.714 killing process with pid 83513 00:14:37.714 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:37.714 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83513' 00:14:37.714 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83513 00:14:37.714 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83513 00:14:37.972 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:14:37.972 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:37.972 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:37.972 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:14:37.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.972 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=83737 00:14:37.972 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 83737 00:14:37.972 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:37.972 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83737 ']' 00:14:37.972 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.972 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:37.972 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.972 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:37.972 00:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:37.972 [2024-12-17 00:30:23.898484] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:14:37.972 [2024-12-17 00:30:23.899414] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.231 [2024-12-17 00:30:24.043995] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.231 [2024-12-17 00:30:24.080179] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.231 [2024-12-17 00:30:24.080262] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.231 [2024-12-17 00:30:24.080285] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.231 [2024-12-17 00:30:24.080301] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.231 [2024-12-17 00:30:24.080338] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
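A fresh target (pid 83737) repeats the same check on the target side: setup_nvmf_tgt is wrapped in NOT because the key file is still mode 0666. Two failures are expected, and the trace below shows both:

  rpc.py keyring_file_add_key key0 /tmp/tmp.48HBuRedJV
  #   -> error -1 "Operation not permitted"   (Invalid permissions for key file: 0100666)
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
  #   -> error -32603 "Internal error"        (Key 'key0' does not exist)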
00:14:38.231 [2024-12-17 00:30:24.080392] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.231 [2024-12-17 00:30:24.115746] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:38.231 00:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:38.231 00:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:38.231 00:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:38.231 00:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:38.231 00:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:38.489 00:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:38.489 00:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.48HBuRedJV 00:14:38.489 00:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:38.489 00:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.48HBuRedJV 00:14:38.489 00:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:14:38.489 00:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:38.489 00:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:14:38.489 00:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:38.489 00:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.48HBuRedJV 00:14:38.489 00:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.48HBuRedJV 00:14:38.489 00:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:38.747 [2024-12-17 00:30:24.611010] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:38.747 00:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:39.312 00:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:39.569 [2024-12-17 00:30:25.367384] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:39.569 [2024-12-17 00:30:25.367693] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:39.569 00:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:39.827 malloc0 00:14:39.827 00:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:40.085 00:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.48HBuRedJV 00:14:40.650 
[2024-12-17 00:30:26.363935] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.48HBuRedJV': 0100666 00:14:40.650 [2024-12-17 00:30:26.363988] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:40.650 request: 00:14:40.650 { 00:14:40.650 "name": "key0", 00:14:40.650 "path": "/tmp/tmp.48HBuRedJV", 00:14:40.650 "method": "keyring_file_add_key", 00:14:40.650 "req_id": 1 00:14:40.650 } 00:14:40.650 Got JSON-RPC error response 00:14:40.650 response: 00:14:40.650 { 00:14:40.650 "code": -1, 00:14:40.650 "message": "Operation not permitted" 00:14:40.650 } 00:14:40.650 00:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:40.650 [2024-12-17 00:30:26.628059] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:14:40.650 [2024-12-17 00:30:26.628197] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:14:40.650 request: 00:14:40.650 { 00:14:40.650 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:40.650 "host": "nqn.2016-06.io.spdk:host1", 00:14:40.650 "psk": "key0", 00:14:40.650 "method": "nvmf_subsystem_add_host", 00:14:40.650 "req_id": 1 00:14:40.650 } 00:14:40.650 Got JSON-RPC error response 00:14:40.650 response: 00:14:40.650 { 00:14:40.650 "code": -32603, 00:14:40.650 "message": "Internal error" 00:14:40.650 } 00:14:40.650 00:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:40.650 00:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:40.650 00:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:40.650 00:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:40.650 00:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 83737 00:14:40.651 00:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83737 ']' 00:14:40.651 00:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83737 00:14:40.651 00:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:40.909 00:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:40.909 00:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83737 00:14:40.909 killing process with pid 83737 00:14:40.909 00:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:40.909 00:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:40.909 00:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83737' 00:14:40.909 00:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83737 00:14:40.909 00:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83737 00:14:40.909 00:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.48HBuRedJV 00:14:40.909 00:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:14:40.909 00:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:40.909 00:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:40.909 00:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:40.909 00:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=83804 00:14:40.909 00:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:40.909 00:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 83804 00:14:40.909 00:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83804 ']' 00:14:40.909 00:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.909 00:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:40.909 00:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.909 00:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:40.909 00:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:40.909 [2024-12-17 00:30:26.893546] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:14:40.909 [2024-12-17 00:30:26.893618] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:41.167 [2024-12-17 00:30:27.028993] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.167 [2024-12-17 00:30:27.062215] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:41.167 [2024-12-17 00:30:27.062520] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:41.167 [2024-12-17 00:30:27.062558] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:41.167 [2024-12-17 00:30:27.062566] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:41.167 [2024-12-17 00:30:27.062573] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
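For the final pass the key was restored to 0600 at target/tls.sh@182, so a third target (pid 83804) and a new bdevperf (pid 83848) complete the TLS setup and the controller attach succeeds. Both processes are then asked to dump their state; the two JSON blobs that follow come from:

  rpc.py save_config                              # target config: keyring entry key0, TLS listener with "secure_channel": true
  rpc.py -s /var/tmp/bdevperf.sock save_config    # initiator config: bdev_nvme_attach_controller with "psk": "key0"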
00:14:41.167 [2024-12-17 00:30:27.062603] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:41.167 [2024-12-17 00:30:27.090685] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:41.167 00:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:41.167 00:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:41.168 00:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:41.168 00:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:41.168 00:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:41.425 00:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:41.426 00:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.48HBuRedJV 00:14:41.426 00:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.48HBuRedJV 00:14:41.426 00:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:41.426 [2024-12-17 00:30:27.415159] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:41.683 00:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:41.941 00:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:42.199 [2024-12-17 00:30:27.947312] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:42.199 [2024-12-17 00:30:27.947561] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:42.199 00:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:42.457 malloc0 00:14:42.457 00:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:42.716 00:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.48HBuRedJV 00:14:42.974 00:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:42.974 00:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:42.974 00:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=83848 00:14:42.974 00:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:42.974 00:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 83848 /var/tmp/bdevperf.sock 00:14:42.974 00:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83848 ']' 
00:14:42.974 00:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:42.974 00:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:42.974 00:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:42.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:42.974 00:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:42.974 00:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:43.233 [2024-12-17 00:30:29.030328] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:14:43.233 [2024-12-17 00:30:29.030647] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83848 ] 00:14:43.233 [2024-12-17 00:30:29.169672] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.233 [2024-12-17 00:30:29.211695] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:43.491 [2024-12-17 00:30:29.245752] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:44.059 00:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:44.059 00:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:44.059 00:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.48HBuRedJV 00:14:44.317 00:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:44.575 [2024-12-17 00:30:30.435673] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:44.575 TLSTESTn1 00:14:44.575 00:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:45.141 00:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:14:45.141 "subsystems": [ 00:14:45.141 { 00:14:45.141 "subsystem": "keyring", 00:14:45.141 "config": [ 00:14:45.141 { 00:14:45.141 "method": "keyring_file_add_key", 00:14:45.141 "params": { 00:14:45.141 "name": "key0", 00:14:45.141 "path": "/tmp/tmp.48HBuRedJV" 00:14:45.141 } 00:14:45.141 } 00:14:45.141 ] 00:14:45.141 }, 00:14:45.141 { 00:14:45.141 "subsystem": "iobuf", 00:14:45.141 "config": [ 00:14:45.141 { 00:14:45.141 "method": "iobuf_set_options", 00:14:45.141 "params": { 00:14:45.141 "small_pool_count": 8192, 00:14:45.141 "large_pool_count": 1024, 00:14:45.141 "small_bufsize": 8192, 00:14:45.141 "large_bufsize": 135168 00:14:45.141 } 00:14:45.141 } 00:14:45.141 ] 00:14:45.141 }, 00:14:45.141 { 00:14:45.141 "subsystem": "sock", 00:14:45.141 "config": [ 00:14:45.141 { 00:14:45.141 "method": "sock_set_default_impl", 00:14:45.141 "params": { 00:14:45.141 "impl_name": "uring" 
00:14:45.141 } 00:14:45.141 }, 00:14:45.141 { 00:14:45.141 "method": "sock_impl_set_options", 00:14:45.141 "params": { 00:14:45.141 "impl_name": "ssl", 00:14:45.141 "recv_buf_size": 4096, 00:14:45.141 "send_buf_size": 4096, 00:14:45.141 "enable_recv_pipe": true, 00:14:45.141 "enable_quickack": false, 00:14:45.141 "enable_placement_id": 0, 00:14:45.141 "enable_zerocopy_send_server": true, 00:14:45.141 "enable_zerocopy_send_client": false, 00:14:45.141 "zerocopy_threshold": 0, 00:14:45.141 "tls_version": 0, 00:14:45.141 "enable_ktls": false 00:14:45.141 } 00:14:45.141 }, 00:14:45.141 { 00:14:45.141 "method": "sock_impl_set_options", 00:14:45.141 "params": { 00:14:45.141 "impl_name": "posix", 00:14:45.141 "recv_buf_size": 2097152, 00:14:45.141 "send_buf_size": 2097152, 00:14:45.141 "enable_recv_pipe": true, 00:14:45.141 "enable_quickack": false, 00:14:45.141 "enable_placement_id": 0, 00:14:45.141 "enable_zerocopy_send_server": true, 00:14:45.141 "enable_zerocopy_send_client": false, 00:14:45.141 "zerocopy_threshold": 0, 00:14:45.141 "tls_version": 0, 00:14:45.141 "enable_ktls": false 00:14:45.141 } 00:14:45.141 }, 00:14:45.141 { 00:14:45.141 "method": "sock_impl_set_options", 00:14:45.141 "params": { 00:14:45.141 "impl_name": "uring", 00:14:45.141 "recv_buf_size": 2097152, 00:14:45.141 "send_buf_size": 2097152, 00:14:45.141 "enable_recv_pipe": true, 00:14:45.141 "enable_quickack": false, 00:14:45.141 "enable_placement_id": 0, 00:14:45.141 "enable_zerocopy_send_server": false, 00:14:45.141 "enable_zerocopy_send_client": false, 00:14:45.141 "zerocopy_threshold": 0, 00:14:45.141 "tls_version": 0, 00:14:45.141 "enable_ktls": false 00:14:45.141 } 00:14:45.141 } 00:14:45.141 ] 00:14:45.141 }, 00:14:45.141 { 00:14:45.141 "subsystem": "vmd", 00:14:45.141 "config": [] 00:14:45.141 }, 00:14:45.141 { 00:14:45.141 "subsystem": "accel", 00:14:45.141 "config": [ 00:14:45.141 { 00:14:45.141 "method": "accel_set_options", 00:14:45.141 "params": { 00:14:45.141 "small_cache_size": 128, 00:14:45.142 "large_cache_size": 16, 00:14:45.142 "task_count": 2048, 00:14:45.142 "sequence_count": 2048, 00:14:45.142 "buf_count": 2048 00:14:45.142 } 00:14:45.142 } 00:14:45.142 ] 00:14:45.142 }, 00:14:45.142 { 00:14:45.142 "subsystem": "bdev", 00:14:45.142 "config": [ 00:14:45.142 { 00:14:45.142 "method": "bdev_set_options", 00:14:45.142 "params": { 00:14:45.142 "bdev_io_pool_size": 65535, 00:14:45.142 "bdev_io_cache_size": 256, 00:14:45.142 "bdev_auto_examine": true, 00:14:45.142 "iobuf_small_cache_size": 128, 00:14:45.142 "iobuf_large_cache_size": 16 00:14:45.142 } 00:14:45.142 }, 00:14:45.142 { 00:14:45.142 "method": "bdev_raid_set_options", 00:14:45.142 "params": { 00:14:45.142 "process_window_size_kb": 1024, 00:14:45.142 "process_max_bandwidth_mb_sec": 0 00:14:45.142 } 00:14:45.142 }, 00:14:45.142 { 00:14:45.142 "method": "bdev_iscsi_set_options", 00:14:45.142 "params": { 00:14:45.142 "timeout_sec": 30 00:14:45.142 } 00:14:45.142 }, 00:14:45.142 { 00:14:45.142 "method": "bdev_nvme_set_options", 00:14:45.142 "params": { 00:14:45.142 "action_on_timeout": "none", 00:14:45.142 "timeout_us": 0, 00:14:45.142 "timeout_admin_us": 0, 00:14:45.142 "keep_alive_timeout_ms": 10000, 00:14:45.142 "arbitration_burst": 0, 00:14:45.142 "low_priority_weight": 0, 00:14:45.142 "medium_priority_weight": 0, 00:14:45.142 "high_priority_weight": 0, 00:14:45.142 "nvme_adminq_poll_period_us": 10000, 00:14:45.142 "nvme_ioq_poll_period_us": 0, 00:14:45.142 "io_queue_requests": 0, 00:14:45.142 "delay_cmd_submit": true, 00:14:45.142 
"transport_retry_count": 4, 00:14:45.142 "bdev_retry_count": 3, 00:14:45.142 "transport_ack_timeout": 0, 00:14:45.142 "ctrlr_loss_timeout_sec": 0, 00:14:45.142 "reconnect_delay_sec": 0, 00:14:45.142 "fast_io_fail_timeout_sec": 0, 00:14:45.142 "disable_auto_failback": false, 00:14:45.142 "generate_uuids": false, 00:14:45.142 "transport_tos": 0, 00:14:45.142 "nvme_error_stat": false, 00:14:45.142 "rdma_srq_size": 0, 00:14:45.142 "io_path_stat": false, 00:14:45.142 "allow_accel_sequence": false, 00:14:45.142 "rdma_max_cq_size": 0, 00:14:45.142 "rdma_cm_event_timeout_ms": 0, 00:14:45.142 "dhchap_digests": [ 00:14:45.142 "sha256", 00:14:45.142 "sha384", 00:14:45.142 "sha512" 00:14:45.142 ], 00:14:45.142 "dhchap_dhgroups": [ 00:14:45.142 "null", 00:14:45.142 "ffdhe2048", 00:14:45.142 "ffdhe3072", 00:14:45.142 "ffdhe4096", 00:14:45.142 "ffdhe6144", 00:14:45.142 "ffdhe8192" 00:14:45.142 ] 00:14:45.142 } 00:14:45.142 }, 00:14:45.142 { 00:14:45.142 "method": "bdev_nvme_set_hotplug", 00:14:45.142 "params": { 00:14:45.142 "period_us": 100000, 00:14:45.142 "enable": false 00:14:45.142 } 00:14:45.142 }, 00:14:45.142 { 00:14:45.142 "method": "bdev_malloc_create", 00:14:45.142 "params": { 00:14:45.142 "name": "malloc0", 00:14:45.142 "num_blocks": 8192, 00:14:45.142 "block_size": 4096, 00:14:45.142 "physical_block_size": 4096, 00:14:45.142 "uuid": "a6bc8fbe-4246-4a21-af96-21f411dd3b72", 00:14:45.142 "optimal_io_boundary": 0, 00:14:45.142 "md_size": 0, 00:14:45.142 "dif_type": 0, 00:14:45.142 "dif_is_head_of_md": false, 00:14:45.142 "dif_pi_format": 0 00:14:45.142 } 00:14:45.142 }, 00:14:45.142 { 00:14:45.142 "method": "bdev_wait_for_examine" 00:14:45.142 } 00:14:45.142 ] 00:14:45.142 }, 00:14:45.142 { 00:14:45.142 "subsystem": "nbd", 00:14:45.142 "config": [] 00:14:45.142 }, 00:14:45.142 { 00:14:45.142 "subsystem": "scheduler", 00:14:45.142 "config": [ 00:14:45.142 { 00:14:45.142 "method": "framework_set_scheduler", 00:14:45.142 "params": { 00:14:45.142 "name": "static" 00:14:45.142 } 00:14:45.142 } 00:14:45.142 ] 00:14:45.142 }, 00:14:45.142 { 00:14:45.142 "subsystem": "nvmf", 00:14:45.142 "config": [ 00:14:45.142 { 00:14:45.142 "method": "nvmf_set_config", 00:14:45.142 "params": { 00:14:45.142 "discovery_filter": "match_any", 00:14:45.142 "admin_cmd_passthru": { 00:14:45.142 "identify_ctrlr": false 00:14:45.142 }, 00:14:45.142 "dhchap_digests": [ 00:14:45.142 "sha256", 00:14:45.142 "sha384", 00:14:45.142 "sha512" 00:14:45.142 ], 00:14:45.142 "dhchap_dhgroups": [ 00:14:45.142 "null", 00:14:45.142 "ffdhe2048", 00:14:45.142 "ffdhe3072", 00:14:45.142 "ffdhe4096", 00:14:45.142 "ffdhe6144", 00:14:45.142 "ffdhe8192" 00:14:45.142 ] 00:14:45.142 } 00:14:45.142 }, 00:14:45.142 { 00:14:45.142 "method": "nvmf_set_max_subsystems", 00:14:45.142 "params": { 00:14:45.142 "max_subsystems": 1024 00:14:45.142 } 00:14:45.142 }, 00:14:45.142 { 00:14:45.142 "method": "nvmf_set_crdt", 00:14:45.142 "params": { 00:14:45.142 "crdt1": 0, 00:14:45.142 "crdt2": 0, 00:14:45.142 "crdt3": 0 00:14:45.142 } 00:14:45.142 }, 00:14:45.142 { 00:14:45.142 "method": "nvmf_create_transport", 00:14:45.142 "params": { 00:14:45.142 "trtype": "TCP", 00:14:45.142 "max_queue_depth": 128, 00:14:45.142 "max_io_qpairs_per_ctrlr": 127, 00:14:45.142 "in_capsule_data_size": 4096, 00:14:45.142 "max_io_size": 131072, 00:14:45.142 "io_unit_size": 131072, 00:14:45.142 "max_aq_depth": 128, 00:14:45.142 "num_shared_buffers": 511, 00:14:45.142 "buf_cache_size": 4294967295, 00:14:45.142 "dif_insert_or_strip": false, 00:14:45.142 "zcopy": false, 00:14:45.142 
"c2h_success": false, 00:14:45.142 "sock_priority": 0, 00:14:45.142 "abort_timeout_sec": 1, 00:14:45.142 "ack_timeout": 0, 00:14:45.142 "data_wr_pool_size": 0 00:14:45.142 } 00:14:45.142 }, 00:14:45.142 { 00:14:45.142 "method": "nvmf_create_subsystem", 00:14:45.142 "params": { 00:14:45.142 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:45.142 "allow_any_host": false, 00:14:45.142 "serial_number": "SPDK00000000000001", 00:14:45.142 "model_number": "SPDK bdev Controller", 00:14:45.142 "max_namespaces": 10, 00:14:45.142 "min_cntlid": 1, 00:14:45.142 "max_cntlid": 65519, 00:14:45.142 "ana_reporting": false 00:14:45.142 } 00:14:45.142 }, 00:14:45.142 { 00:14:45.142 "method": "nvmf_subsystem_add_host", 00:14:45.142 "params": { 00:14:45.142 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:45.142 "host": "nqn.2016-06.io.spdk:host1", 00:14:45.142 "psk": "key0" 00:14:45.142 } 00:14:45.142 }, 00:14:45.142 { 00:14:45.142 "method": "nvmf_subsystem_add_ns", 00:14:45.142 "params": { 00:14:45.142 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:45.142 "namespace": { 00:14:45.142 "nsid": 1, 00:14:45.142 "bdev_name": "malloc0", 00:14:45.142 "nguid": "A6BC8FBE42464A21AF9621F411DD3B72", 00:14:45.142 "uuid": "a6bc8fbe-4246-4a21-af96-21f411dd3b72", 00:14:45.142 "no_auto_visible": false 00:14:45.142 } 00:14:45.142 } 00:14:45.142 }, 00:14:45.142 { 00:14:45.142 "method": "nvmf_subsystem_add_listener", 00:14:45.142 "params": { 00:14:45.142 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:45.142 "listen_address": { 00:14:45.142 "trtype": "TCP", 00:14:45.142 "adrfam": "IPv4", 00:14:45.142 "traddr": "10.0.0.3", 00:14:45.142 "trsvcid": "4420" 00:14:45.142 }, 00:14:45.142 "secure_channel": true 00:14:45.142 } 00:14:45.142 } 00:14:45.142 ] 00:14:45.142 } 00:14:45.142 ] 00:14:45.142 }' 00:14:45.142 00:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:45.402 00:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:14:45.402 "subsystems": [ 00:14:45.402 { 00:14:45.402 "subsystem": "keyring", 00:14:45.402 "config": [ 00:14:45.402 { 00:14:45.402 "method": "keyring_file_add_key", 00:14:45.402 "params": { 00:14:45.402 "name": "key0", 00:14:45.402 "path": "/tmp/tmp.48HBuRedJV" 00:14:45.402 } 00:14:45.402 } 00:14:45.402 ] 00:14:45.402 }, 00:14:45.402 { 00:14:45.402 "subsystem": "iobuf", 00:14:45.402 "config": [ 00:14:45.402 { 00:14:45.402 "method": "iobuf_set_options", 00:14:45.402 "params": { 00:14:45.402 "small_pool_count": 8192, 00:14:45.402 "large_pool_count": 1024, 00:14:45.402 "small_bufsize": 8192, 00:14:45.402 "large_bufsize": 135168 00:14:45.402 } 00:14:45.402 } 00:14:45.402 ] 00:14:45.402 }, 00:14:45.402 { 00:14:45.402 "subsystem": "sock", 00:14:45.402 "config": [ 00:14:45.402 { 00:14:45.402 "method": "sock_set_default_impl", 00:14:45.402 "params": { 00:14:45.402 "impl_name": "uring" 00:14:45.402 } 00:14:45.402 }, 00:14:45.402 { 00:14:45.402 "method": "sock_impl_set_options", 00:14:45.402 "params": { 00:14:45.402 "impl_name": "ssl", 00:14:45.402 "recv_buf_size": 4096, 00:14:45.402 "send_buf_size": 4096, 00:14:45.402 "enable_recv_pipe": true, 00:14:45.402 "enable_quickack": false, 00:14:45.402 "enable_placement_id": 0, 00:14:45.402 "enable_zerocopy_send_server": true, 00:14:45.402 "enable_zerocopy_send_client": false, 00:14:45.402 "zerocopy_threshold": 0, 00:14:45.402 "tls_version": 0, 00:14:45.402 "enable_ktls": false 00:14:45.402 } 00:14:45.402 }, 00:14:45.402 { 00:14:45.402 "method": 
"sock_impl_set_options", 00:14:45.402 "params": { 00:14:45.402 "impl_name": "posix", 00:14:45.402 "recv_buf_size": 2097152, 00:14:45.402 "send_buf_size": 2097152, 00:14:45.402 "enable_recv_pipe": true, 00:14:45.402 "enable_quickack": false, 00:14:45.402 "enable_placement_id": 0, 00:14:45.402 "enable_zerocopy_send_server": true, 00:14:45.402 "enable_zerocopy_send_client": false, 00:14:45.402 "zerocopy_threshold": 0, 00:14:45.402 "tls_version": 0, 00:14:45.402 "enable_ktls": false 00:14:45.402 } 00:14:45.402 }, 00:14:45.402 { 00:14:45.402 "method": "sock_impl_set_options", 00:14:45.402 "params": { 00:14:45.402 "impl_name": "uring", 00:14:45.402 "recv_buf_size": 2097152, 00:14:45.402 "send_buf_size": 2097152, 00:14:45.402 "enable_recv_pipe": true, 00:14:45.402 "enable_quickack": false, 00:14:45.402 "enable_placement_id": 0, 00:14:45.402 "enable_zerocopy_send_server": false, 00:14:45.402 "enable_zerocopy_send_client": false, 00:14:45.402 "zerocopy_threshold": 0, 00:14:45.402 "tls_version": 0, 00:14:45.402 "enable_ktls": false 00:14:45.402 } 00:14:45.402 } 00:14:45.402 ] 00:14:45.402 }, 00:14:45.402 { 00:14:45.402 "subsystem": "vmd", 00:14:45.402 "config": [] 00:14:45.402 }, 00:14:45.402 { 00:14:45.402 "subsystem": "accel", 00:14:45.402 "config": [ 00:14:45.402 { 00:14:45.402 "method": "accel_set_options", 00:14:45.402 "params": { 00:14:45.402 "small_cache_size": 128, 00:14:45.402 "large_cache_size": 16, 00:14:45.402 "task_count": 2048, 00:14:45.402 "sequence_count": 2048, 00:14:45.402 "buf_count": 2048 00:14:45.402 } 00:14:45.402 } 00:14:45.402 ] 00:14:45.402 }, 00:14:45.402 { 00:14:45.402 "subsystem": "bdev", 00:14:45.402 "config": [ 00:14:45.402 { 00:14:45.402 "method": "bdev_set_options", 00:14:45.402 "params": { 00:14:45.402 "bdev_io_pool_size": 65535, 00:14:45.402 "bdev_io_cache_size": 256, 00:14:45.402 "bdev_auto_examine": true, 00:14:45.402 "iobuf_small_cache_size": 128, 00:14:45.403 "iobuf_large_cache_size": 16 00:14:45.403 } 00:14:45.403 }, 00:14:45.403 { 00:14:45.403 "method": "bdev_raid_set_options", 00:14:45.403 "params": { 00:14:45.403 "process_window_size_kb": 1024, 00:14:45.403 "process_max_bandwidth_mb_sec": 0 00:14:45.403 } 00:14:45.403 }, 00:14:45.403 { 00:14:45.403 "method": "bdev_iscsi_set_options", 00:14:45.403 "params": { 00:14:45.403 "timeout_sec": 30 00:14:45.403 } 00:14:45.403 }, 00:14:45.403 { 00:14:45.403 "method": "bdev_nvme_set_options", 00:14:45.403 "params": { 00:14:45.403 "action_on_timeout": "none", 00:14:45.403 "timeout_us": 0, 00:14:45.403 "timeout_admin_us": 0, 00:14:45.403 "keep_alive_timeout_ms": 10000, 00:14:45.403 "arbitration_burst": 0, 00:14:45.403 "low_priority_weight": 0, 00:14:45.403 "medium_priority_weight": 0, 00:14:45.403 "high_priority_weight": 0, 00:14:45.403 "nvme_adminq_poll_period_us": 10000, 00:14:45.403 "nvme_ioq_poll_period_us": 0, 00:14:45.403 "io_queue_requests": 512, 00:14:45.403 "delay_cmd_submit": true, 00:14:45.403 "transport_retry_count": 4, 00:14:45.403 "bdev_retry_count": 3, 00:14:45.403 "transport_ack_timeout": 0, 00:14:45.403 "ctrlr_loss_timeout_sec": 0, 00:14:45.403 "reconnect_delay_sec": 0, 00:14:45.403 "fast_io_fail_timeout_sec": 0, 00:14:45.403 "disable_auto_failback": false, 00:14:45.403 "generate_uuids": false, 00:14:45.403 "transport_tos": 0, 00:14:45.403 "nvme_error_stat": false, 00:14:45.403 "rdma_srq_size": 0, 00:14:45.403 "io_path_stat": false, 00:14:45.403 "allow_accel_sequence": false, 00:14:45.403 "rdma_max_cq_size": 0, 00:14:45.403 "rdma_cm_event_timeout_ms": 0, 00:14:45.403 "dhchap_digests": [ 00:14:45.403 
"sha256", 00:14:45.403 "sha384", 00:14:45.403 "sha512" 00:14:45.403 ], 00:14:45.403 "dhchap_dhgroups": [ 00:14:45.403 "null", 00:14:45.403 "ffdhe2048", 00:14:45.403 "ffdhe3072", 00:14:45.403 "ffdhe4096", 00:14:45.403 "ffdhe6144", 00:14:45.403 "ffdhe8192" 00:14:45.403 ] 00:14:45.403 } 00:14:45.403 }, 00:14:45.403 { 00:14:45.403 "method": "bdev_nvme_attach_controller", 00:14:45.403 "params": { 00:14:45.403 "name": "TLSTEST", 00:14:45.403 "trtype": "TCP", 00:14:45.403 "adrfam": "IPv4", 00:14:45.403 "traddr": "10.0.0.3", 00:14:45.403 "trsvcid": "4420", 00:14:45.403 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:45.403 "prchk_reftag": false, 00:14:45.403 "prchk_guard": false, 00:14:45.403 "ctrlr_loss_timeout_sec": 0, 00:14:45.403 "reconnect_delay_sec": 0, 00:14:45.403 "fast_io_fail_timeout_sec": 0, 00:14:45.403 "psk": "key0", 00:14:45.403 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:45.403 "hdgst": false, 00:14:45.403 "ddgst": false 00:14:45.403 } 00:14:45.403 }, 00:14:45.403 { 00:14:45.403 "method": "bdev_nvme_set_hotplug", 00:14:45.403 "params": { 00:14:45.403 "period_us": 100000, 00:14:45.403 "enable": false 00:14:45.403 } 00:14:45.403 }, 00:14:45.403 { 00:14:45.403 "method": "bdev_wait_for_examine" 00:14:45.403 } 00:14:45.403 ] 00:14:45.403 }, 00:14:45.403 { 00:14:45.403 "subsystem": "nbd", 00:14:45.403 "config": [] 00:14:45.403 } 00:14:45.403 ] 00:14:45.403 }' 00:14:45.403 00:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 83848 00:14:45.403 00:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83848 ']' 00:14:45.403 00:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83848 00:14:45.403 00:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:45.403 00:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:45.403 00:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83848 00:14:45.403 killing process with pid 83848 00:14:45.403 Received shutdown signal, test time was about 10.000000 seconds 00:14:45.403 00:14:45.403 Latency(us) 00:14:45.403 [2024-12-17T00:30:31.406Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:45.403 [2024-12-17T00:30:31.406Z] =================================================================================================================== 00:14:45.403 [2024-12-17T00:30:31.406Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:45.403 00:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:45.403 00:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:45.403 00:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83848' 00:14:45.403 00:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83848 00:14:45.403 00:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83848 00:14:45.403 00:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 83804 00:14:45.403 00:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83804 ']' 00:14:45.403 00:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83804 00:14:45.403 00:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@955 -- # uname 00:14:45.403 00:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:45.403 00:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83804 00:14:45.403 killing process with pid 83804 00:14:45.403 00:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:45.403 00:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:45.403 00:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83804' 00:14:45.403 00:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83804 00:14:45.403 00:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83804 00:14:45.663 00:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:14:45.663 00:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:14:45.663 "subsystems": [ 00:14:45.663 { 00:14:45.663 "subsystem": "keyring", 00:14:45.663 "config": [ 00:14:45.663 { 00:14:45.663 "method": "keyring_file_add_key", 00:14:45.663 "params": { 00:14:45.663 "name": "key0", 00:14:45.663 "path": "/tmp/tmp.48HBuRedJV" 00:14:45.663 } 00:14:45.663 } 00:14:45.663 ] 00:14:45.663 }, 00:14:45.663 { 00:14:45.663 "subsystem": "iobuf", 00:14:45.663 "config": [ 00:14:45.663 { 00:14:45.663 "method": "iobuf_set_options", 00:14:45.663 "params": { 00:14:45.663 "small_pool_count": 8192, 00:14:45.663 "large_pool_count": 1024, 00:14:45.663 "small_bufsize": 8192, 00:14:45.663 "large_bufsize": 135168 00:14:45.663 } 00:14:45.663 } 00:14:45.663 ] 00:14:45.663 }, 00:14:45.663 { 00:14:45.663 "subsystem": "sock", 00:14:45.663 "config": [ 00:14:45.663 { 00:14:45.663 "method": "sock_set_default_impl", 00:14:45.663 "params": { 00:14:45.663 "impl_name": "uring" 00:14:45.663 } 00:14:45.663 }, 00:14:45.663 { 00:14:45.663 "method": "sock_impl_set_options", 00:14:45.663 "params": { 00:14:45.663 "impl_name": "ssl", 00:14:45.663 "recv_buf_size": 4096, 00:14:45.663 "send_buf_size": 4096, 00:14:45.663 "enable_recv_pipe": true, 00:14:45.663 "enable_quickack": false, 00:14:45.663 "enable_placement_id": 0, 00:14:45.663 "enable_zerocopy_send_server": true, 00:14:45.663 "enable_zerocopy_send_client": false, 00:14:45.663 "zerocopy_threshold": 0, 00:14:45.663 "tls_version": 0, 00:14:45.663 "enable_ktls": false 00:14:45.663 } 00:14:45.663 }, 00:14:45.663 { 00:14:45.663 "method": "sock_impl_set_options", 00:14:45.663 "params": { 00:14:45.663 "impl_name": "posix", 00:14:45.663 "recv_buf_size": 2097152, 00:14:45.663 "send_buf_size": 2097152, 00:14:45.663 "enable_recv_pipe": true, 00:14:45.663 "enable_quickack": false, 00:14:45.663 "enable_placement_id": 0, 00:14:45.663 "enable_zerocopy_send_server": true, 00:14:45.663 "enable_zerocopy_send_client": false, 00:14:45.663 "zerocopy_threshold": 0, 00:14:45.663 "tls_version": 0, 00:14:45.663 "enable_ktls": false 00:14:45.663 } 00:14:45.663 }, 00:14:45.663 { 00:14:45.663 "method": "sock_impl_set_options", 00:14:45.663 "params": { 00:14:45.663 "impl_name": "uring", 00:14:45.663 "recv_buf_size": 2097152, 00:14:45.663 "send_buf_size": 2097152, 00:14:45.663 "enable_recv_pipe": true, 00:14:45.663 "enable_quickack": false, 00:14:45.663 "enable_placement_id": 0, 00:14:45.663 "enable_zerocopy_send_server": false, 00:14:45.663 
"enable_zerocopy_send_client": false, 00:14:45.663 "zerocopy_threshold": 0, 00:14:45.663 "tls_version": 0, 00:14:45.663 "enable_ktls": false 00:14:45.663 } 00:14:45.663 } 00:14:45.663 ] 00:14:45.663 }, 00:14:45.663 { 00:14:45.663 "subsystem": "vmd", 00:14:45.663 "config": [] 00:14:45.663 }, 00:14:45.663 { 00:14:45.663 "subsystem": "accel", 00:14:45.663 "config": [ 00:14:45.663 { 00:14:45.663 "method": "accel_set_options", 00:14:45.663 "params": { 00:14:45.663 "small_cache_size": 128, 00:14:45.663 "large_cache_size": 16, 00:14:45.663 "task_count": 2048, 00:14:45.663 "sequence_count": 2048, 00:14:45.663 "buf_count": 2048 00:14:45.663 } 00:14:45.663 } 00:14:45.663 ] 00:14:45.663 }, 00:14:45.663 { 00:14:45.663 "subsystem": "bdev", 00:14:45.663 "config": [ 00:14:45.663 { 00:14:45.663 "method": "bdev_set_options", 00:14:45.663 "params": { 00:14:45.663 "bdev_io_pool_size": 65535, 00:14:45.663 "bdev_io_cache_size": 256, 00:14:45.663 "bdev_auto_examine": true, 00:14:45.663 "iobuf_small_cache_size": 128, 00:14:45.663 "iobuf_large_cache_size": 16 00:14:45.663 } 00:14:45.663 }, 00:14:45.663 { 00:14:45.663 "method": "bdev_raid_set_options", 00:14:45.663 "params": { 00:14:45.663 "process_window_size_kb": 1024, 00:14:45.663 "process_max_bandwidth_mb_sec": 0 00:14:45.663 } 00:14:45.663 }, 00:14:45.663 { 00:14:45.663 "method": "bdev_iscsi_set_options", 00:14:45.663 "params": { 00:14:45.663 "timeout_sec": 30 00:14:45.663 } 00:14:45.663 }, 00:14:45.663 { 00:14:45.663 "method": "bdev_nvme_set_options", 00:14:45.663 "params": { 00:14:45.663 "action_on_timeout": "none", 00:14:45.663 "timeout_us": 0, 00:14:45.663 "timeout_admin_us": 0, 00:14:45.663 "keep_alive_timeout_ms": 10000, 00:14:45.663 "arbitration_burst": 0, 00:14:45.663 "low_priority_weight": 0, 00:14:45.663 "medium_priority_weight": 0, 00:14:45.663 "high_priority_weight": 0, 00:14:45.663 "nvme_adminq_poll_period_us": 10000, 00:14:45.663 "nvme_ioq_poll_period_us": 0, 00:14:45.663 "io_queue_requests": 0, 00:14:45.663 "delay_cmd_submit": true, 00:14:45.663 "transport_retry_count": 4, 00:14:45.663 "bdev_retry_count": 3, 00:14:45.663 "transport_ack_timeout": 0, 00:14:45.663 "ctrlr_loss_timeout_sec": 0, 00:14:45.663 "reconnect_delay_sec": 0, 00:14:45.663 "fast_io_fail_timeout_sec": 0, 00:14:45.663 "disable_auto_failback": false, 00:14:45.663 "generate_uuids": false, 00:14:45.663 "transport_tos": 0, 00:14:45.663 "nvme_error_stat": false, 00:14:45.663 "rdma_srq_size": 0, 00:14:45.663 "io_path_stat": false, 00:14:45.663 "allow_accel_sequence": false, 00:14:45.663 "rdma_max_cq_size": 0, 00:14:45.663 "rdma_cm_event_timeout_ms": 0, 00:14:45.663 "dhchap_digests": [ 00:14:45.663 "sha256", 00:14:45.663 "sha384", 00:14:45.663 "sha512" 00:14:45.663 ], 00:14:45.663 "dhchap_dhgroups": [ 00:14:45.663 "null", 00:14:45.663 "ffdhe2048", 00:14:45.663 "ffdhe3072", 00:14:45.663 "ffdhe4096", 00:14:45.663 "ffdhe6144", 00:14:45.663 "ffdhe8192" 00:14:45.663 ] 00:14:45.663 } 00:14:45.663 }, 00:14:45.663 { 00:14:45.663 "method": "bdev_nvme_set_hotplug", 00:14:45.663 "params": { 00:14:45.663 "period_us": 100000, 00:14:45.663 "enable": false 00:14:45.663 } 00:14:45.663 }, 00:14:45.663 { 00:14:45.663 "method": "bdev_malloc_create", 00:14:45.663 "params": { 00:14:45.663 "name": "malloc0", 00:14:45.663 "num_blocks": 8192, 00:14:45.663 "block_size": 4096, 00:14:45.663 "physical_block_size": 4096, 00:14:45.663 "uuid": "a6bc8fbe-4246-4a21-af96-21f411dd3b72", 00:14:45.663 "optimal_io_boundary": 0, 00:14:45.663 "md_size": 0, 00:14:45.663 "dif_type": 0, 00:14:45.663 "dif_is_head_of_md": false, 
00:14:45.663 "dif_pi_format": 0 00:14:45.663 } 00:14:45.663 }, 00:14:45.663 { 00:14:45.663 "method": "bdev_wait_for_examine" 00:14:45.663 } 00:14:45.663 ] 00:14:45.663 }, 00:14:45.663 { 00:14:45.663 "subsystem": "nbd", 00:14:45.663 "config": [] 00:14:45.663 }, 00:14:45.663 { 00:14:45.663 "subsystem": "scheduler", 00:14:45.663 "config": [ 00:14:45.663 { 00:14:45.663 "method": "framework_set_scheduler", 00:14:45.663 "params": { 00:14:45.663 "name": "static" 00:14:45.663 } 00:14:45.663 } 00:14:45.663 ] 00:14:45.663 }, 00:14:45.663 { 00:14:45.663 "subsystem": "nvmf", 00:14:45.663 "config": [ 00:14:45.663 { 00:14:45.663 "method": "nvmf_set_config", 00:14:45.663 "params": { 00:14:45.663 "discovery_filter": "match_any", 00:14:45.663 "admin_cmd_passthru": { 00:14:45.663 "identify_ctrlr": false 00:14:45.663 }, 00:14:45.663 "dhchap_digests": [ 00:14:45.663 "sha256", 00:14:45.663 "sha384", 00:14:45.663 "sha512" 00:14:45.663 ], 00:14:45.663 "dhchap_dhgroups": [ 00:14:45.663 "null", 00:14:45.663 "ffdhe2048", 00:14:45.663 "ffdhe3072", 00:14:45.663 "ffdhe4096", 00:14:45.663 "ffdhe6144", 00:14:45.663 "ffdhe8192" 00:14:45.663 ] 00:14:45.663 } 00:14:45.663 }, 00:14:45.663 { 00:14:45.664 "method": "nvmf_set_max_subsystems", 00:14:45.664 "params": { 00:14:45.664 "max_subsystems": 1024 00:14:45.664 } 00:14:45.664 }, 00:14:45.664 { 00:14:45.664 "method": "nvmf_set_crdt", 00:14:45.664 "params": { 00:14:45.664 "crdt1": 0, 00:14:45.664 "crdt2": 0, 00:14:45.664 "crdt3": 0 00:14:45.664 } 00:14:45.664 }, 00:14:45.664 { 00:14:45.664 "method": "nvmf_create_transport", 00:14:45.664 "params": { 00:14:45.664 "trtype": "TCP", 00:14:45.664 "max_queue_depth": 128, 00:14:45.664 "max_io_qpairs_per_ctrlr": 127, 00:14:45.664 "in_capsule_data_size": 4096, 00:14:45.664 "max_io_size": 131072, 00:14:45.664 "io_unit_size": 131072, 00:14:45.664 "max_aq_depth": 128, 00:14:45.664 "num_shared_buffers": 511, 00:14:45.664 "buf_cache_size": 4294967295, 00:14:45.664 "dif_insert_or_strip": false, 00:14:45.664 "zcopy": false, 00:14:45.664 "c2h_success": false, 00:14:45.664 "sock_priority": 0, 00:14:45.664 "abort_timeout_sec": 1, 00:14:45.664 "ack_timeout": 0, 00:14:45.664 "data_wr_pool_size": 0 00:14:45.664 } 00:14:45.664 }, 00:14:45.664 { 00:14:45.664 "method": "nvmf_create_subsystem", 00:14:45.664 "params": { 00:14:45.664 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:45.664 "allow_any_host": false, 00:14:45.664 "serial_number": "SPDK00000000000001", 00:14:45.664 "model_number": "SPDK bdev Controller", 00:14:45.664 "max_namespaces": 10, 00:14:45.664 "min_cntlid": 1, 00:14:45.664 "max_cntlid": 65519, 00:14:45.664 "ana_reporting": false 00:14:45.664 } 00:14:45.664 }, 00:14:45.664 { 00:14:45.664 "method": "nvmf_subsystem_add_host", 00:14:45.664 "params": { 00:14:45.664 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:45.664 "host": "nqn.2016-06.io.spdk:host1", 00:14:45.664 "psk": "key0" 00:14:45.664 } 00:14:45.664 }, 00:14:45.664 { 00:14:45.664 "method": "nvmf_subsystem_add_ns", 00:14:45.664 "params": { 00:14:45.664 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:45.664 "namespace": { 00:14:45.664 "nsid": 1, 00:14:45.664 "bdev_name": "malloc0", 00:14:45.664 "nguid": "A6BC8FBE42464A21AF9621F411DD3B72", 00:14:45.664 "uuid": "a6bc8fbe-4246-4a21-af96-21f411dd3b72", 00:14:45.664 "no_auto_visible": false 00:14:45.664 } 00:14:45.664 } 00:14:45.664 }, 00:14:45.664 { 00:14:45.664 "method": "nvmf_subsystem_add_listener", 00:14:45.664 "params": { 00:14:45.664 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:45.664 "listen_address": { 00:14:45.664 "trtype": "TCP", 00:14:45.664 
"adrfam": "IPv4", 00:14:45.664 "traddr": "10.0.0.3", 00:14:45.664 "trsvcid": "4420" 00:14:45.664 }, 00:14:45.664 "secure_channel": true 00:14:45.664 } 00:14:45.664 } 00:14:45.664 ] 00:14:45.664 } 00:14:45.664 ] 00:14:45.664 }' 00:14:45.664 00:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:45.664 00:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:45.664 00:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:45.664 00:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=83897 00:14:45.664 00:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:14:45.664 00:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 83897 00:14:45.664 00:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83897 ']' 00:14:45.664 00:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.664 00:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:45.664 00:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.664 00:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:45.664 00:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:45.664 [2024-12-17 00:30:31.602641] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:14:45.664 [2024-12-17 00:30:31.602946] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.923 [2024-12-17 00:30:31.740545] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.923 [2024-12-17 00:30:31.775206] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:45.923 [2024-12-17 00:30:31.775255] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:45.923 [2024-12-17 00:30:31.775283] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:45.923 [2024-12-17 00:30:31.775291] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:45.923 [2024-12-17 00:30:31.775298] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:45.923 [2024-12-17 00:30:31.775394] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.923 [2024-12-17 00:30:31.916923] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:46.181 [2024-12-17 00:30:31.972429] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:46.181 [2024-12-17 00:30:32.015243] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:46.181 [2024-12-17 00:30:32.015514] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:46.748 00:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:46.748 00:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:46.748 00:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:46.748 00:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:46.748 00:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:46.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:46.748 00:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:46.748 00:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=83928 00:14:46.748 00:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 83928 /var/tmp/bdevperf.sock 00:14:46.748 00:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83928 ']' 00:14:46.748 00:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:14:46.748 00:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:46.748 00:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:46.748 00:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:14:46.748 "subsystems": [ 00:14:46.748 { 00:14:46.748 "subsystem": "keyring", 00:14:46.748 "config": [ 00:14:46.748 { 00:14:46.748 "method": "keyring_file_add_key", 00:14:46.748 "params": { 00:14:46.748 "name": "key0", 00:14:46.748 "path": "/tmp/tmp.48HBuRedJV" 00:14:46.748 } 00:14:46.748 } 00:14:46.748 ] 00:14:46.748 }, 00:14:46.748 { 00:14:46.748 "subsystem": "iobuf", 00:14:46.748 "config": [ 00:14:46.748 { 00:14:46.748 "method": "iobuf_set_options", 00:14:46.748 "params": { 00:14:46.748 "small_pool_count": 8192, 00:14:46.748 "large_pool_count": 1024, 00:14:46.748 "small_bufsize": 8192, 00:14:46.748 "large_bufsize": 135168 00:14:46.748 } 00:14:46.748 } 00:14:46.748 ] 00:14:46.748 }, 00:14:46.748 { 00:14:46.748 "subsystem": "sock", 00:14:46.748 "config": [ 00:14:46.748 { 00:14:46.748 "method": "sock_set_default_impl", 00:14:46.748 "params": { 00:14:46.748 "impl_name": "uring" 00:14:46.748 } 00:14:46.748 }, 00:14:46.748 { 00:14:46.748 "method": "sock_impl_set_options", 00:14:46.748 "params": { 00:14:46.748 "impl_name": "ssl", 00:14:46.748 "recv_buf_size": 4096, 00:14:46.748 "send_buf_size": 4096, 00:14:46.748 "enable_recv_pipe": true, 00:14:46.748 "enable_quickack": false, 00:14:46.748 "enable_placement_id": 0, 
00:14:46.748 "enable_zerocopy_send_server": true, 00:14:46.748 "enable_zerocopy_send_client": false, 00:14:46.748 "zerocopy_threshold": 0, 00:14:46.748 "tls_version": 0, 00:14:46.748 "enable_ktls": false 00:14:46.748 } 00:14:46.748 }, 00:14:46.748 { 00:14:46.748 "method": "sock_impl_set_options", 00:14:46.748 "params": { 00:14:46.748 "impl_name": "posix", 00:14:46.748 "recv_buf_size": 2097152, 00:14:46.748 "send_buf_size": 2097152, 00:14:46.748 "enable_recv_pipe": true, 00:14:46.748 "enable_quickack": false, 00:14:46.748 "enable_placement_id": 0, 00:14:46.748 "enable_zerocopy_send_server": true, 00:14:46.748 "enable_zerocopy_send_client": false, 00:14:46.748 "zerocopy_threshold": 0, 00:14:46.748 "tls_version": 0, 00:14:46.748 "enable_ktls": false 00:14:46.748 } 00:14:46.748 }, 00:14:46.748 { 00:14:46.748 "method": "sock_impl_set_options", 00:14:46.748 "params": { 00:14:46.748 "impl_name": "uring", 00:14:46.748 "recv_buf_size": 2097152, 00:14:46.748 "send_buf_size": 2097152, 00:14:46.748 "enable_recv_pipe": true, 00:14:46.748 "enable_quickack": false, 00:14:46.748 "enable_placement_id": 0, 00:14:46.748 "enable_zerocopy_send_server": false, 00:14:46.748 "enable_zerocopy_send_client": false, 00:14:46.748 "zerocopy_threshold": 0, 00:14:46.748 "tls_version": 0, 00:14:46.748 "enable_ktls": false 00:14:46.748 } 00:14:46.748 } 00:14:46.748 ] 00:14:46.748 }, 00:14:46.748 { 00:14:46.748 "subsystem": "vmd", 00:14:46.748 "config": [] 00:14:46.748 }, 00:14:46.748 { 00:14:46.748 "subsystem": "accel", 00:14:46.748 "config": [ 00:14:46.748 { 00:14:46.748 "method": "accel_set_options", 00:14:46.748 "params": { 00:14:46.748 "small_cache_size": 128, 00:14:46.748 "large_cache_size": 16, 00:14:46.748 "task_count": 2048, 00:14:46.748 "sequence_count": 2048, 00:14:46.748 "buf_count": 2048 00:14:46.748 } 00:14:46.748 } 00:14:46.748 ] 00:14:46.748 }, 00:14:46.748 { 00:14:46.748 "subsystem": "bdev", 00:14:46.748 "config": [ 00:14:46.748 { 00:14:46.748 "method": "bdev_set_options", 00:14:46.748 "params": { 00:14:46.748 "bdev_io_pool_size": 65535, 00:14:46.748 "bdev_io_cache_size": 256, 00:14:46.748 "bdev_auto_examine": true, 00:14:46.748 "iobuf_small_cache_size": 128, 00:14:46.748 "iobuf_large_cache_size": 16 00:14:46.748 } 00:14:46.748 }, 00:14:46.748 { 00:14:46.748 "method": "bdev_raid_set_options", 00:14:46.748 "params": { 00:14:46.748 "process_window_size_kb": 1024, 00:14:46.748 "process_max_bandwidth_mb_sec": 0 00:14:46.748 } 00:14:46.748 }, 00:14:46.748 { 00:14:46.748 "method": "bdev_iscsi_set_options", 00:14:46.748 "params": { 00:14:46.748 "timeout_sec": 30 00:14:46.748 } 00:14:46.748 }, 00:14:46.748 { 00:14:46.748 "method": "bdev_nvme_set_options", 00:14:46.748 "params": { 00:14:46.748 "action_on_timeout": "none", 00:14:46.748 "timeout_us": 0, 00:14:46.748 "timeout_admin_us": 0, 00:14:46.748 "keep_alive_timeout_ms": 10000, 00:14:46.748 "arbitration_burst": 0, 00:14:46.748 "low_priority_weight": 0, 00:14:46.748 "medium_priority_weight": 0, 00:14:46.748 "high_priority_weight": 0, 00:14:46.748 "nvme_adminq_poll_period_us": 10000, 00:14:46.748 "nvme_ioq_poll_period_us": 0, 00:14:46.748 "io_queue_requests": 512, 00:14:46.748 "delay_cmd_submit": true, 00:14:46.748 "transport_retry_count": 4, 00:14:46.748 "bdev_retry_count": 3, 00:14:46.748 "transport_ack_timeout": 0, 00:14:46.748 "ctrlr_loss_timeout_sec": 0, 00:14:46.748 "reconnect_delay_sec": 0, 00:14:46.748 "fast_io_fail_timeout_sec": 0, 00:14:46.748 "disable_auto_failback": false, 00:14:46.748 "generate_uuids": false, 00:14:46.748 "transport_tos": 0, 
00:14:46.748 "nvme_error_stat": false, 00:14:46.748 "rdma_srq_size": 0, 00:14:46.748 "io_path_stat": false, 00:14:46.748 "allow_accel_sequence": false, 00:14:46.748 "rdma_max_cq_size": 0, 00:14:46.748 "rdma_cm_event_timeout_ms": 0, 00:14:46.748 "dhchap_digests": [ 00:14:46.748 "sha256", 00:14:46.748 "sha384", 00:14:46.748 "sha512" 00:14:46.748 ], 00:14:46.748 "dhchap_dhgroups": [ 00:14:46.748 "null", 00:14:46.748 "ffdhe2048", 00:14:46.748 "ffdhe3072", 00:14:46.748 "ffdhe4096", 00:14:46.748 "ffdhe6144", 00:14:46.748 "ffdhe8192" 00:14:46.748 ] 00:14:46.748 } 00:14:46.748 }, 00:14:46.748 { 00:14:46.748 "method": "bdev_nvme_attach_controller", 00:14:46.748 "params": { 00:14:46.748 "name": "TLSTEST", 00:14:46.748 "trtype": "TCP", 00:14:46.748 "adrfam": "IPv4", 00:14:46.748 "traddr": "10.0.0.3", 00:14:46.748 "trsvcid": "4420", 00:14:46.748 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:46.748 "prchk_reftag": false, 00:14:46.748 "prchk_guard": false, 00:14:46.748 "ctrlr_loss_timeout_sec": 0, 00:14:46.748 "reconnect_delay_sec": 0, 00:14:46.748 "fast_io_fail_timeout_sec": 0, 00:14:46.749 "psk": "key0", 00:14:46.749 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:46.749 "hdgst": false, 00:14:46.749 "ddgst": false 00:14:46.749 } 00:14:46.749 }, 00:14:46.749 { 00:14:46.749 "method": "bdev_nvme_set_hotplug", 00:14:46.749 "params": { 00:14:46.749 "period_us": 100000, 00:14:46.749 "enable": false 00:14:46.749 } 00:14:46.749 }, 00:14:46.749 { 00:14:46.749 "method": "bdev_wait_for_examine" 00:14:46.749 } 00:14:46.749 ] 00:14:46.749 }, 00:14:46.749 { 00:14:46.749 "subsystem": "nbd", 00:14:46.749 "config": [] 00:14:46.749 } 00:14:46.749 ] 00:14:46.749 }' 00:14:46.749 00:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:46.749 00:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:46.749 00:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:46.749 [2024-12-17 00:30:32.664275] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:14:46.749 [2024-12-17 00:30:32.664707] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83928 ] 00:14:47.007 [2024-12-17 00:30:32.804897] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.007 [2024-12-17 00:30:32.847612] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:47.007 [2024-12-17 00:30:32.962168] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:47.007 [2024-12-17 00:30:32.993616] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:47.942 00:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:47.942 00:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:47.942 00:30:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:47.942 Running I/O for 10 seconds... 
00:14:49.810 4224.00 IOPS, 16.50 MiB/s [2024-12-17T00:30:37.188Z] 4262.00 IOPS, 16.65 MiB/s [2024-12-17T00:30:37.754Z] 4248.00 IOPS, 16.59 MiB/s [2024-12-17T00:30:38.751Z] 4213.25 IOPS, 16.46 MiB/s [2024-12-17T00:30:40.126Z] 4162.00 IOPS, 16.26 MiB/s [2024-12-17T00:30:41.061Z] 4129.33 IOPS, 16.13 MiB/s [2024-12-17T00:30:41.996Z] 4101.86 IOPS, 16.02 MiB/s [2024-12-17T00:30:42.930Z] 4082.38 IOPS, 15.95 MiB/s [2024-12-17T00:30:43.864Z] 4072.22 IOPS, 15.91 MiB/s [2024-12-17T00:30:43.864Z] 4068.10 IOPS, 15.89 MiB/s 00:14:57.861 Latency(us) 00:14:57.861 [2024-12-17T00:30:43.864Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.861 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:57.861 Verification LBA range: start 0x0 length 0x2000 00:14:57.861 TLSTESTn1 : 10.02 4074.24 15.92 0.00 0.00 31359.67 5659.93 25499.46 00:14:57.861 [2024-12-17T00:30:43.864Z] =================================================================================================================== 00:14:57.861 [2024-12-17T00:30:43.864Z] Total : 4074.24 15.92 0.00 0.00 31359.67 5659.93 25499.46 00:14:57.861 { 00:14:57.861 "results": [ 00:14:57.861 { 00:14:57.861 "job": "TLSTESTn1", 00:14:57.861 "core_mask": "0x4", 00:14:57.861 "workload": "verify", 00:14:57.861 "status": "finished", 00:14:57.861 "verify_range": { 00:14:57.861 "start": 0, 00:14:57.861 "length": 8192 00:14:57.861 }, 00:14:57.861 "queue_depth": 128, 00:14:57.861 "io_size": 4096, 00:14:57.861 "runtime": 10.015848, 00:14:57.861 "iops": 4074.243139472564, 00:14:57.861 "mibps": 15.915012263564703, 00:14:57.861 "io_failed": 0, 00:14:57.861 "io_timeout": 0, 00:14:57.861 "avg_latency_us": 31359.668842555977, 00:14:57.861 "min_latency_us": 5659.927272727273, 00:14:57.861 "max_latency_us": 25499.46181818182 00:14:57.861 } 00:14:57.861 ], 00:14:57.861 "core_count": 1 00:14:57.861 } 00:14:57.861 00:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:57.861 00:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 83928 00:14:57.861 00:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83928 ']' 00:14:57.861 00:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83928 00:14:57.861 00:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:57.861 00:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:57.861 00:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83928 00:14:57.861 killing process with pid 83928 00:14:57.861 Received shutdown signal, test time was about 10.000000 seconds 00:14:57.861 00:14:57.862 Latency(us) 00:14:57.862 [2024-12-17T00:30:43.865Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.862 [2024-12-17T00:30:43.865Z] =================================================================================================================== 00:14:57.862 [2024-12-17T00:30:43.865Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:57.862 00:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:57.862 00:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:57.862 00:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 83928' 00:14:57.862 00:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83928 00:14:57.862 00:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83928 00:14:58.120 00:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 83897 00:14:58.120 00:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83897 ']' 00:14:58.120 00:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83897 00:14:58.120 00:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:58.120 00:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:58.120 00:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83897 00:14:58.120 killing process with pid 83897 00:14:58.120 00:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:58.120 00:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:58.120 00:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83897' 00:14:58.120 00:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83897 00:14:58.120 00:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83897 00:14:58.379 00:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:14:58.379 00:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:58.379 00:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:58.379 00:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:58.379 00:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=84068 00:14:58.379 00:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:58.379 00:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 84068 00:14:58.379 00:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84068 ']' 00:14:58.379 00:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.379 00:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:58.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.379 00:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.379 00:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:58.379 00:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:58.379 [2024-12-17 00:30:44.217025] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
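Each killprocess call in this section (pids 83848, 83804, 83928 and 83897) expands to the same xtrace: check the pid, look up the process name with ps, print a message, then kill and wait. A rough reconstruction of the helper, inferred only from that xtrace (the real function lives in test/common/autotest_common.sh and handles more cases, for example sudo-wrapped and non-Linux processes):

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                            # sh@950: refuse an empty pid
    kill -0 "$pid" || return 0                           # sh@954: nothing to do if it already exited
    if [ "$(uname)" = Linux ]; then                      # sh@955
        process_name=$(ps --no-headers -o comm= "$pid")  # sh@956: e.g. reactor_1, reactor_2
    fi
    # sh@960: the real helper special-cases process_name = sudo here; omitted in this sketch
    echo "killing process with pid $pid"
    kill "$pid"                                          # sh@969
    wait "$pid"                                          # sh@974: reap it and propagate its exit status
}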
00:14:58.379 [2024-12-17 00:30:44.217131] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:58.379 [2024-12-17 00:30:44.352700] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.637 [2024-12-17 00:30:44.392633] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:58.637 [2024-12-17 00:30:44.392699] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:58.637 [2024-12-17 00:30:44.392714] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:58.637 [2024-12-17 00:30:44.392724] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:58.637 [2024-12-17 00:30:44.392733] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:58.637 [2024-12-17 00:30:44.392767] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.637 [2024-12-17 00:30:44.425587] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:58.637 00:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:58.637 00:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:58.637 00:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:58.637 00:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:58.638 00:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:58.638 00:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:58.638 00:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.48HBuRedJV 00:14:58.638 00:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.48HBuRedJV 00:14:58.638 00:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:58.896 [2024-12-17 00:30:44.755486] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:58.896 00:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:59.154 00:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:59.412 [2024-12-17 00:30:45.279621] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:59.412 [2024-12-17 00:30:45.279858] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:59.412 00:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:59.669 malloc0 00:14:59.669 00:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
00:14:59.928 00:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.48HBuRedJV 00:15:00.186 00:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:00.444 00:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:00.444 00:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=84116 00:15:00.444 00:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:00.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:00.444 00:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 84116 /var/tmp/bdevperf.sock 00:15:00.444 00:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84116 ']' 00:15:00.444 00:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:00.444 00:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:00.444 00:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:00.445 00:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:00.445 00:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:00.703 [2024-12-17 00:30:46.478050] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
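Condensed, the setup_nvmf_tgt sequence that just ran (target/tls.sh@52 through @59) amounts to the following rpc.py calls against the freshly started target; every command, address and path below is copied from the xtrace above, only regrouped with comments:

# transport, subsystem and TLS-capable listener (-k marks the listener as requiring a secure channel)
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
# backing namespace
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# pre-shared key and the host NQN allowed to use it
scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.48HBuRedJV
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0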
00:15:00.703 [2024-12-17 00:30:46.478299] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84116 ] 00:15:00.703 [2024-12-17 00:30:46.615648] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.703 [2024-12-17 00:30:46.657254] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:00.703 [2024-12-17 00:30:46.689903] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:00.961 00:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:00.961 00:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:00.961 00:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.48HBuRedJV 00:15:01.219 00:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:01.477 [2024-12-17 00:30:47.255478] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:01.477 nvme0n1 00:15:01.477 00:30:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:01.477 Running I/O for 1 seconds... 00:15:02.854 4096.00 IOPS, 16.00 MiB/s 00:15:02.854 Latency(us) 00:15:02.854 [2024-12-17T00:30:48.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:02.854 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:02.854 Verification LBA range: start 0x0 length 0x2000 00:15:02.854 nvme0n1 : 1.03 4114.55 16.07 0.00 0.00 30755.89 7060.01 19541.64 00:15:02.854 [2024-12-17T00:30:48.857Z] =================================================================================================================== 00:15:02.854 [2024-12-17T00:30:48.857Z] Total : 4114.55 16.07 0.00 0.00 30755.89 7060.01 19541.64 00:15:02.854 { 00:15:02.854 "results": [ 00:15:02.854 { 00:15:02.854 "job": "nvme0n1", 00:15:02.854 "core_mask": "0x2", 00:15:02.854 "workload": "verify", 00:15:02.854 "status": "finished", 00:15:02.854 "verify_range": { 00:15:02.854 "start": 0, 00:15:02.854 "length": 8192 00:15:02.854 }, 00:15:02.854 "queue_depth": 128, 00:15:02.854 "io_size": 4096, 00:15:02.854 "runtime": 1.0266, 00:15:02.854 "iops": 4114.552893045003, 00:15:02.854 "mibps": 16.072472238457042, 00:15:02.854 "io_failed": 0, 00:15:02.854 "io_timeout": 0, 00:15:02.854 "avg_latency_us": 30755.89289256198, 00:15:02.854 "min_latency_us": 7060.014545454545, 00:15:02.854 "max_latency_us": 19541.643636363635 00:15:02.854 } 00:15:02.854 ], 00:15:02.854 "core_count": 1 00:15:02.854 } 00:15:02.854 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 84116 00:15:02.854 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84116 ']' 00:15:02.854 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84116 00:15:02.854 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # uname 00:15:02.854 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:02.854 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84116 00:15:02.854 killing process with pid 84116 00:15:02.854 Received shutdown signal, test time was about 1.000000 seconds 00:15:02.854 00:15:02.854 Latency(us) 00:15:02.854 [2024-12-17T00:30:48.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:02.854 [2024-12-17T00:30:48.857Z] =================================================================================================================== 00:15:02.854 [2024-12-17T00:30:48.857Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:02.854 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:02.854 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:02.854 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84116' 00:15:02.854 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84116 00:15:02.854 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84116 00:15:02.854 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 84068 00:15:02.854 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84068 ']' 00:15:02.854 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84068 00:15:02.854 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:02.854 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:02.854 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84068 00:15:02.854 killing process with pid 84068 00:15:02.854 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:02.854 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:02.854 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84068' 00:15:02.854 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84068 00:15:02.854 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84068 00:15:03.113 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:15:03.113 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:03.113 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:03.113 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:03.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:03.113 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=84154 00:15:03.113 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:03.113 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 84154 00:15:03.113 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84154 ']' 00:15:03.113 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.113 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:03.113 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.114 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:03.114 00:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:03.114 [2024-12-17 00:30:48.920791] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:03.114 [2024-12-17 00:30:48.921172] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:03.114 [2024-12-17 00:30:49.054960] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.114 [2024-12-17 00:30:49.091078] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:03.114 [2024-12-17 00:30:49.091137] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:03.114 [2024-12-17 00:30:49.091149] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:03.114 [2024-12-17 00:30:49.091158] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:03.114 [2024-12-17 00:30:49.091165] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
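The startup notices repeated above also name the debug hook for these runs; if one of them needs to be inspected after the fact, the snapshot command given in the notice is:

spdk_trace -s nvmf -i 0    # capture a snapshot of the enabled tracepoints for app instance 0
# or simply copy /dev/shm/nvmf_trace.0 for offline analysis, as the notice suggests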
00:15:03.114 [2024-12-17 00:30:49.091192] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.372 [2024-12-17 00:30:49.121152] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:03.372 00:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:03.372 00:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:03.372 00:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:03.372 00:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:03.372 00:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:03.372 00:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:03.372 00:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:15:03.372 00:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.372 00:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:03.372 [2024-12-17 00:30:49.213180] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:03.372 malloc0 00:15:03.372 [2024-12-17 00:30:49.248083] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:03.372 [2024-12-17 00:30:49.248467] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:03.372 00:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.372 00:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=84179 00:15:03.372 00:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:03.372 00:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 84179 /var/tmp/bdevperf.sock 00:15:03.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:03.372 00:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84179 ']' 00:15:03.372 00:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:03.372 00:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:03.372 00:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:03.372 00:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:03.372 00:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:03.372 [2024-12-17 00:30:49.324885] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:15:03.372 [2024-12-17 00:30:49.325651] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84179 ] 00:15:03.630 [2024-12-17 00:30:49.458434] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.630 [2024-12-17 00:30:49.492866] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:03.630 [2024-12-17 00:30:49.521256] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:03.630 00:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:03.630 00:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:03.630 00:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.48HBuRedJV 00:15:03.888 00:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:04.146 [2024-12-17 00:30:50.064378] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:04.146 nvme0n1 00:15:04.146 00:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:04.404 Running I/O for 1 seconds... 00:15:05.339 3968.00 IOPS, 15.50 MiB/s 00:15:05.339 Latency(us) 00:15:05.339 [2024-12-17T00:30:51.342Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.339 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:05.339 Verification LBA range: start 0x0 length 0x2000 00:15:05.339 nvme0n1 : 1.02 4002.29 15.63 0.00 0.00 31638.86 7268.54 19422.49 00:15:05.339 [2024-12-17T00:30:51.342Z] =================================================================================================================== 00:15:05.339 [2024-12-17T00:30:51.342Z] Total : 4002.29 15.63 0.00 0.00 31638.86 7268.54 19422.49 00:15:05.339 { 00:15:05.339 "results": [ 00:15:05.339 { 00:15:05.339 "job": "nvme0n1", 00:15:05.339 "core_mask": "0x2", 00:15:05.339 "workload": "verify", 00:15:05.339 "status": "finished", 00:15:05.339 "verify_range": { 00:15:05.339 "start": 0, 00:15:05.339 "length": 8192 00:15:05.339 }, 00:15:05.339 "queue_depth": 128, 00:15:05.339 "io_size": 4096, 00:15:05.339 "runtime": 1.023414, 00:15:05.339 "iops": 4002.2903732018517, 00:15:05.339 "mibps": 15.633946770319733, 00:15:05.339 "io_failed": 0, 00:15:05.339 "io_timeout": 0, 00:15:05.339 "avg_latency_us": 31638.861818181816, 00:15:05.339 "min_latency_us": 7268.538181818182, 00:15:05.339 "max_latency_us": 19422.487272727274 00:15:05.339 } 00:15:05.339 ], 00:15:05.339 "core_count": 1 00:15:05.339 } 00:15:05.339 00:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:15:05.339 00:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.339 00:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:05.598 00:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.598 00:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:15:05.598 "subsystems": [ 00:15:05.598 { 00:15:05.598 "subsystem": "keyring", 00:15:05.598 "config": [ 00:15:05.598 { 00:15:05.598 "method": "keyring_file_add_key", 00:15:05.598 "params": { 00:15:05.598 "name": "key0", 00:15:05.598 "path": "/tmp/tmp.48HBuRedJV" 00:15:05.598 } 00:15:05.598 } 00:15:05.598 ] 00:15:05.598 }, 00:15:05.598 { 00:15:05.598 "subsystem": "iobuf", 00:15:05.598 "config": [ 00:15:05.598 { 00:15:05.598 "method": "iobuf_set_options", 00:15:05.598 "params": { 00:15:05.598 "small_pool_count": 8192, 00:15:05.598 "large_pool_count": 1024, 00:15:05.598 "small_bufsize": 8192, 00:15:05.598 "large_bufsize": 135168 00:15:05.598 } 00:15:05.598 } 00:15:05.598 ] 00:15:05.598 }, 00:15:05.598 { 00:15:05.598 "subsystem": "sock", 00:15:05.598 "config": [ 00:15:05.598 { 00:15:05.598 "method": "sock_set_default_impl", 00:15:05.598 "params": { 00:15:05.598 "impl_name": "uring" 00:15:05.598 } 00:15:05.598 }, 00:15:05.598 { 00:15:05.598 "method": "sock_impl_set_options", 00:15:05.598 "params": { 00:15:05.598 "impl_name": "ssl", 00:15:05.598 "recv_buf_size": 4096, 00:15:05.598 "send_buf_size": 4096, 00:15:05.598 "enable_recv_pipe": true, 00:15:05.598 "enable_quickack": false, 00:15:05.598 "enable_placement_id": 0, 00:15:05.598 "enable_zerocopy_send_server": true, 00:15:05.598 "enable_zerocopy_send_client": false, 00:15:05.598 "zerocopy_threshold": 0, 00:15:05.598 "tls_version": 0, 00:15:05.598 "enable_ktls": false 00:15:05.598 } 00:15:05.598 }, 00:15:05.598 { 00:15:05.598 "method": "sock_impl_set_options", 00:15:05.598 "params": { 00:15:05.598 "impl_name": "posix", 00:15:05.598 "recv_buf_size": 2097152, 00:15:05.598 "send_buf_size": 2097152, 00:15:05.598 "enable_recv_pipe": true, 00:15:05.598 "enable_quickack": false, 00:15:05.598 "enable_placement_id": 0, 00:15:05.598 "enable_zerocopy_send_server": true, 00:15:05.598 "enable_zerocopy_send_client": false, 00:15:05.598 "zerocopy_threshold": 0, 00:15:05.598 "tls_version": 0, 00:15:05.598 "enable_ktls": false 00:15:05.598 } 00:15:05.598 }, 00:15:05.598 { 00:15:05.598 "method": "sock_impl_set_options", 00:15:05.598 "params": { 00:15:05.598 "impl_name": "uring", 00:15:05.598 "recv_buf_size": 2097152, 00:15:05.598 "send_buf_size": 2097152, 00:15:05.598 "enable_recv_pipe": true, 00:15:05.598 "enable_quickack": false, 00:15:05.598 "enable_placement_id": 0, 00:15:05.598 "enable_zerocopy_send_server": false, 00:15:05.598 "enable_zerocopy_send_client": false, 00:15:05.598 "zerocopy_threshold": 0, 00:15:05.598 "tls_version": 0, 00:15:05.598 "enable_ktls": false 00:15:05.598 } 00:15:05.598 } 00:15:05.598 ] 00:15:05.598 }, 00:15:05.598 { 00:15:05.598 "subsystem": "vmd", 00:15:05.598 "config": [] 00:15:05.598 }, 00:15:05.598 { 00:15:05.598 "subsystem": "accel", 00:15:05.598 "config": [ 00:15:05.598 { 00:15:05.598 "method": "accel_set_options", 00:15:05.599 "params": { 00:15:05.599 "small_cache_size": 128, 00:15:05.599 "large_cache_size": 16, 00:15:05.599 "task_count": 2048, 00:15:05.599 "sequence_count": 2048, 00:15:05.599 "buf_count": 2048 00:15:05.599 } 00:15:05.599 } 00:15:05.599 ] 00:15:05.599 }, 00:15:05.599 { 00:15:05.599 "subsystem": "bdev", 00:15:05.599 "config": [ 00:15:05.599 { 00:15:05.599 "method": "bdev_set_options", 00:15:05.599 "params": { 00:15:05.599 "bdev_io_pool_size": 65535, 00:15:05.599 "bdev_io_cache_size": 256, 00:15:05.599 "bdev_auto_examine": true, 00:15:05.599 "iobuf_small_cache_size": 
128, 00:15:05.599 "iobuf_large_cache_size": 16 00:15:05.599 } 00:15:05.599 }, 00:15:05.599 { 00:15:05.599 "method": "bdev_raid_set_options", 00:15:05.599 "params": { 00:15:05.599 "process_window_size_kb": 1024, 00:15:05.599 "process_max_bandwidth_mb_sec": 0 00:15:05.599 } 00:15:05.599 }, 00:15:05.599 { 00:15:05.599 "method": "bdev_iscsi_set_options", 00:15:05.599 "params": { 00:15:05.599 "timeout_sec": 30 00:15:05.599 } 00:15:05.599 }, 00:15:05.599 { 00:15:05.599 "method": "bdev_nvme_set_options", 00:15:05.599 "params": { 00:15:05.599 "action_on_timeout": "none", 00:15:05.599 "timeout_us": 0, 00:15:05.599 "timeout_admin_us": 0, 00:15:05.599 "keep_alive_timeout_ms": 10000, 00:15:05.599 "arbitration_burst": 0, 00:15:05.599 "low_priority_weight": 0, 00:15:05.599 "medium_priority_weight": 0, 00:15:05.599 "high_priority_weight": 0, 00:15:05.599 "nvme_adminq_poll_period_us": 10000, 00:15:05.599 "nvme_ioq_poll_period_us": 0, 00:15:05.599 "io_queue_requests": 0, 00:15:05.599 "delay_cmd_submit": true, 00:15:05.599 "transport_retry_count": 4, 00:15:05.599 "bdev_retry_count": 3, 00:15:05.599 "transport_ack_timeout": 0, 00:15:05.599 "ctrlr_loss_timeout_sec": 0, 00:15:05.599 "reconnect_delay_sec": 0, 00:15:05.599 "fast_io_fail_timeout_sec": 0, 00:15:05.599 "disable_auto_failback": false, 00:15:05.599 "generate_uuids": false, 00:15:05.599 "transport_tos": 0, 00:15:05.599 "nvme_error_stat": false, 00:15:05.599 "rdma_srq_size": 0, 00:15:05.599 "io_path_stat": false, 00:15:05.599 "allow_accel_sequence": false, 00:15:05.599 "rdma_max_cq_size": 0, 00:15:05.599 "rdma_cm_event_timeout_ms": 0, 00:15:05.599 "dhchap_digests": [ 00:15:05.599 "sha256", 00:15:05.599 "sha384", 00:15:05.599 "sha512" 00:15:05.599 ], 00:15:05.599 "dhchap_dhgroups": [ 00:15:05.599 "null", 00:15:05.599 "ffdhe2048", 00:15:05.599 "ffdhe3072", 00:15:05.599 "ffdhe4096", 00:15:05.599 "ffdhe6144", 00:15:05.599 "ffdhe8192" 00:15:05.599 ] 00:15:05.599 } 00:15:05.599 }, 00:15:05.599 { 00:15:05.599 "method": "bdev_nvme_set_hotplug", 00:15:05.599 "params": { 00:15:05.599 "period_us": 100000, 00:15:05.599 "enable": false 00:15:05.599 } 00:15:05.599 }, 00:15:05.599 { 00:15:05.599 "method": "bdev_malloc_create", 00:15:05.599 "params": { 00:15:05.599 "name": "malloc0", 00:15:05.599 "num_blocks": 8192, 00:15:05.599 "block_size": 4096, 00:15:05.599 "physical_block_size": 4096, 00:15:05.599 "uuid": "46bedbc1-8471-49db-8fbe-1d96b6af186f", 00:15:05.599 "optimal_io_boundary": 0, 00:15:05.599 "md_size": 0, 00:15:05.599 "dif_type": 0, 00:15:05.599 "dif_is_head_of_md": false, 00:15:05.599 "dif_pi_format": 0 00:15:05.599 } 00:15:05.599 }, 00:15:05.599 { 00:15:05.599 "method": "bdev_wait_for_examine" 00:15:05.599 } 00:15:05.599 ] 00:15:05.599 }, 00:15:05.599 { 00:15:05.599 "subsystem": "nbd", 00:15:05.599 "config": [] 00:15:05.599 }, 00:15:05.599 { 00:15:05.599 "subsystem": "scheduler", 00:15:05.599 "config": [ 00:15:05.599 { 00:15:05.599 "method": "framework_set_scheduler", 00:15:05.599 "params": { 00:15:05.599 "name": "static" 00:15:05.599 } 00:15:05.599 } 00:15:05.599 ] 00:15:05.599 }, 00:15:05.599 { 00:15:05.599 "subsystem": "nvmf", 00:15:05.599 "config": [ 00:15:05.599 { 00:15:05.599 "method": "nvmf_set_config", 00:15:05.599 "params": { 00:15:05.599 "discovery_filter": "match_any", 00:15:05.599 "admin_cmd_passthru": { 00:15:05.599 "identify_ctrlr": false 00:15:05.599 }, 00:15:05.599 "dhchap_digests": [ 00:15:05.599 "sha256", 00:15:05.599 "sha384", 00:15:05.599 "sha512" 00:15:05.599 ], 00:15:05.599 "dhchap_dhgroups": [ 00:15:05.599 "null", 00:15:05.599 
"ffdhe2048", 00:15:05.599 "ffdhe3072", 00:15:05.599 "ffdhe4096", 00:15:05.599 "ffdhe6144", 00:15:05.599 "ffdhe8192" 00:15:05.599 ] 00:15:05.599 } 00:15:05.599 }, 00:15:05.599 { 00:15:05.599 "method": "nvmf_set_max_subsystems", 00:15:05.599 "params": { 00:15:05.599 "max_subsystems": 1024 00:15:05.599 } 00:15:05.599 }, 00:15:05.599 { 00:15:05.599 "method": "nvmf_set_crdt", 00:15:05.599 "params": { 00:15:05.599 "crdt1": 0, 00:15:05.599 "crdt2": 0, 00:15:05.599 "crdt3": 0 00:15:05.599 } 00:15:05.599 }, 00:15:05.599 { 00:15:05.599 "method": "nvmf_create_transport", 00:15:05.599 "params": { 00:15:05.599 "trtype": "TCP", 00:15:05.599 "max_queue_depth": 128, 00:15:05.599 "max_io_qpairs_per_ctrlr": 127, 00:15:05.599 "in_capsule_data_size": 4096, 00:15:05.599 "max_io_size": 131072, 00:15:05.599 "io_unit_size": 131072, 00:15:05.599 "max_aq_depth": 128, 00:15:05.599 "num_shared_buffers": 511, 00:15:05.599 "buf_cache_size": 4294967295, 00:15:05.599 "dif_insert_or_strip": false, 00:15:05.599 "zcopy": false, 00:15:05.599 "c2h_success": false, 00:15:05.599 "sock_priority": 0, 00:15:05.599 "abort_timeout_sec": 1, 00:15:05.599 "ack_timeout": 0, 00:15:05.599 "data_wr_pool_size": 0 00:15:05.599 } 00:15:05.599 }, 00:15:05.599 { 00:15:05.599 "method": "nvmf_create_subsystem", 00:15:05.599 "params": { 00:15:05.599 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:05.599 "allow_any_host": false, 00:15:05.599 "serial_number": "00000000000000000000", 00:15:05.599 "model_number": "SPDK bdev Controller", 00:15:05.599 "max_namespaces": 32, 00:15:05.599 "min_cntlid": 1, 00:15:05.599 "max_cntlid": 65519, 00:15:05.599 "ana_reporting": false 00:15:05.599 } 00:15:05.599 }, 00:15:05.599 { 00:15:05.599 "method": "nvmf_subsystem_add_host", 00:15:05.599 "params": { 00:15:05.599 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:05.599 "host": "nqn.2016-06.io.spdk:host1", 00:15:05.599 "psk": "key0" 00:15:05.599 } 00:15:05.599 }, 00:15:05.599 { 00:15:05.599 "method": "nvmf_subsystem_add_ns", 00:15:05.599 "params": { 00:15:05.599 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:05.599 "namespace": { 00:15:05.599 "nsid": 1, 00:15:05.599 "bdev_name": "malloc0", 00:15:05.599 "nguid": "46BEDBC1847149DB8FBE1D96B6AF186F", 00:15:05.599 "uuid": "46bedbc1-8471-49db-8fbe-1d96b6af186f", 00:15:05.599 "no_auto_visible": false 00:15:05.599 } 00:15:05.599 } 00:15:05.599 }, 00:15:05.599 { 00:15:05.599 "method": "nvmf_subsystem_add_listener", 00:15:05.599 "params": { 00:15:05.599 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:05.599 "listen_address": { 00:15:05.599 "trtype": "TCP", 00:15:05.599 "adrfam": "IPv4", 00:15:05.599 "traddr": "10.0.0.3", 00:15:05.599 "trsvcid": "4420" 00:15:05.599 }, 00:15:05.599 "secure_channel": false, 00:15:05.599 "sock_impl": "ssl" 00:15:05.599 } 00:15:05.599 } 00:15:05.599 ] 00:15:05.599 } 00:15:05.599 ] 00:15:05.599 }' 00:15:05.599 00:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:05.872 00:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:15:05.873 "subsystems": [ 00:15:05.873 { 00:15:05.873 "subsystem": "keyring", 00:15:05.873 "config": [ 00:15:05.873 { 00:15:05.873 "method": "keyring_file_add_key", 00:15:05.873 "params": { 00:15:05.873 "name": "key0", 00:15:05.873 "path": "/tmp/tmp.48HBuRedJV" 00:15:05.873 } 00:15:05.873 } 00:15:05.873 ] 00:15:05.873 }, 00:15:05.873 { 00:15:05.873 "subsystem": "iobuf", 00:15:05.873 "config": [ 00:15:05.873 { 00:15:05.873 "method": "iobuf_set_options", 00:15:05.873 
"params": { 00:15:05.873 "small_pool_count": 8192, 00:15:05.873 "large_pool_count": 1024, 00:15:05.873 "small_bufsize": 8192, 00:15:05.873 "large_bufsize": 135168 00:15:05.873 } 00:15:05.873 } 00:15:05.873 ] 00:15:05.873 }, 00:15:05.873 { 00:15:05.873 "subsystem": "sock", 00:15:05.873 "config": [ 00:15:05.873 { 00:15:05.873 "method": "sock_set_default_impl", 00:15:05.873 "params": { 00:15:05.873 "impl_name": "uring" 00:15:05.873 } 00:15:05.873 }, 00:15:05.873 { 00:15:05.873 "method": "sock_impl_set_options", 00:15:05.873 "params": { 00:15:05.873 "impl_name": "ssl", 00:15:05.873 "recv_buf_size": 4096, 00:15:05.873 "send_buf_size": 4096, 00:15:05.873 "enable_recv_pipe": true, 00:15:05.873 "enable_quickack": false, 00:15:05.873 "enable_placement_id": 0, 00:15:05.873 "enable_zerocopy_send_server": true, 00:15:05.873 "enable_zerocopy_send_client": false, 00:15:05.873 "zerocopy_threshold": 0, 00:15:05.873 "tls_version": 0, 00:15:05.873 "enable_ktls": false 00:15:05.873 } 00:15:05.873 }, 00:15:05.873 { 00:15:05.873 "method": "sock_impl_set_options", 00:15:05.873 "params": { 00:15:05.873 "impl_name": "posix", 00:15:05.873 "recv_buf_size": 2097152, 00:15:05.873 "send_buf_size": 2097152, 00:15:05.873 "enable_recv_pipe": true, 00:15:05.873 "enable_quickack": false, 00:15:05.873 "enable_placement_id": 0, 00:15:05.873 "enable_zerocopy_send_server": true, 00:15:05.873 "enable_zerocopy_send_client": false, 00:15:05.873 "zerocopy_threshold": 0, 00:15:05.873 "tls_version": 0, 00:15:05.873 "enable_ktls": false 00:15:05.873 } 00:15:05.873 }, 00:15:05.873 { 00:15:05.873 "method": "sock_impl_set_options", 00:15:05.873 "params": { 00:15:05.873 "impl_name": "uring", 00:15:05.873 "recv_buf_size": 2097152, 00:15:05.873 "send_buf_size": 2097152, 00:15:05.873 "enable_recv_pipe": true, 00:15:05.873 "enable_quickack": false, 00:15:05.873 "enable_placement_id": 0, 00:15:05.873 "enable_zerocopy_send_server": false, 00:15:05.873 "enable_zerocopy_send_client": false, 00:15:05.873 "zerocopy_threshold": 0, 00:15:05.873 "tls_version": 0, 00:15:05.873 "enable_ktls": false 00:15:05.873 } 00:15:05.873 } 00:15:05.873 ] 00:15:05.873 }, 00:15:05.873 { 00:15:05.873 "subsystem": "vmd", 00:15:05.873 "config": [] 00:15:05.873 }, 00:15:05.873 { 00:15:05.873 "subsystem": "accel", 00:15:05.873 "config": [ 00:15:05.873 { 00:15:05.873 "method": "accel_set_options", 00:15:05.873 "params": { 00:15:05.873 "small_cache_size": 128, 00:15:05.873 "large_cache_size": 16, 00:15:05.873 "task_count": 2048, 00:15:05.873 "sequence_count": 2048, 00:15:05.873 "buf_count": 2048 00:15:05.873 } 00:15:05.873 } 00:15:05.873 ] 00:15:05.873 }, 00:15:05.873 { 00:15:05.873 "subsystem": "bdev", 00:15:05.873 "config": [ 00:15:05.873 { 00:15:05.873 "method": "bdev_set_options", 00:15:05.873 "params": { 00:15:05.873 "bdev_io_pool_size": 65535, 00:15:05.873 "bdev_io_cache_size": 256, 00:15:05.873 "bdev_auto_examine": true, 00:15:05.873 "iobuf_small_cache_size": 128, 00:15:05.873 "iobuf_large_cache_size": 16 00:15:05.873 } 00:15:05.873 }, 00:15:05.873 { 00:15:05.873 "method": "bdev_raid_set_options", 00:15:05.873 "params": { 00:15:05.873 "process_window_size_kb": 1024, 00:15:05.873 "process_max_bandwidth_mb_sec": 0 00:15:05.873 } 00:15:05.873 }, 00:15:05.873 { 00:15:05.873 "method": "bdev_iscsi_set_options", 00:15:05.873 "params": { 00:15:05.873 "timeout_sec": 30 00:15:05.873 } 00:15:05.873 }, 00:15:05.873 { 00:15:05.873 "method": "bdev_nvme_set_options", 00:15:05.873 "params": { 00:15:05.873 "action_on_timeout": "none", 00:15:05.873 "timeout_us": 0, 00:15:05.873 
"timeout_admin_us": 0, 00:15:05.873 "keep_alive_timeout_ms": 10000, 00:15:05.873 "arbitration_burst": 0, 00:15:05.873 "low_priority_weight": 0, 00:15:05.873 "medium_priority_weight": 0, 00:15:05.873 "high_priority_weight": 0, 00:15:05.873 "nvme_adminq_poll_period_us": 10000, 00:15:05.873 "nvme_ioq_poll_period_us": 0, 00:15:05.873 "io_queue_requests": 512, 00:15:05.873 "delay_cmd_submit": true, 00:15:05.873 "transport_retry_count": 4, 00:15:05.873 "bdev_retry_count": 3, 00:15:05.873 "transport_ack_timeout": 0, 00:15:05.873 "ctrlr_loss_timeout_sec": 0, 00:15:05.873 "reconnect_delay_sec": 0, 00:15:05.873 "fast_io_fail_timeout_sec": 0, 00:15:05.873 "disable_auto_failback": false, 00:15:05.873 "generate_uuids": false, 00:15:05.873 "transport_tos": 0, 00:15:05.873 "nvme_error_stat": false, 00:15:05.873 "rdma_srq_size": 0, 00:15:05.873 "io_path_stat": false, 00:15:05.873 "allow_accel_sequence": false, 00:15:05.873 "rdma_max_cq_size": 0, 00:15:05.873 "rdma_cm_event_timeout_ms": 0, 00:15:05.873 "dhchap_digests": [ 00:15:05.873 "sha256", 00:15:05.873 "sha384", 00:15:05.873 "sha512" 00:15:05.873 ], 00:15:05.873 "dhchap_dhgroups": [ 00:15:05.873 "null", 00:15:05.873 "ffdhe2048", 00:15:05.873 "ffdhe3072", 00:15:05.873 "ffdhe4096", 00:15:05.873 "ffdhe6144", 00:15:05.873 "ffdhe8192" 00:15:05.873 ] 00:15:05.873 } 00:15:05.873 }, 00:15:05.873 { 00:15:05.873 "method": "bdev_nvme_attach_controller", 00:15:05.873 "params": { 00:15:05.873 "name": "nvme0", 00:15:05.873 "trtype": "TCP", 00:15:05.873 "adrfam": "IPv4", 00:15:05.873 "traddr": "10.0.0.3", 00:15:05.873 "trsvcid": "4420", 00:15:05.873 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:05.873 "prchk_reftag": false, 00:15:05.873 "prchk_guard": false, 00:15:05.873 "ctrlr_loss_timeout_sec": 0, 00:15:05.873 "reconnect_delay_sec": 0, 00:15:05.873 "fast_io_fail_timeout_sec": 0, 00:15:05.873 "psk": "key0", 00:15:05.873 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:05.873 "hdgst": false, 00:15:05.873 "ddgst": false 00:15:05.873 } 00:15:05.873 }, 00:15:05.873 { 00:15:05.873 "method": "bdev_nvme_set_hotplug", 00:15:05.873 "params": { 00:15:05.873 "period_us": 100000, 00:15:05.873 "enable": false 00:15:05.873 } 00:15:05.873 }, 00:15:05.873 { 00:15:05.873 "method": "bdev_enable_histogram", 00:15:05.873 "params": { 00:15:05.873 "name": "nvme0n1", 00:15:05.873 "enable": true 00:15:05.873 } 00:15:05.873 }, 00:15:05.873 { 00:15:05.873 "method": "bdev_wait_for_examine" 00:15:05.873 } 00:15:05.873 ] 00:15:05.873 }, 00:15:05.873 { 00:15:05.873 "subsystem": "nbd", 00:15:05.873 "config": [] 00:15:05.873 } 00:15:05.873 ] 00:15:05.873 }' 00:15:05.873 00:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 84179 00:15:05.873 00:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84179 ']' 00:15:05.873 00:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84179 00:15:05.873 00:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:05.873 00:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:05.873 00:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84179 00:15:05.873 killing process with pid 84179 00:15:05.873 Received shutdown signal, test time was about 1.000000 seconds 00:15:05.873 00:15:05.873 Latency(us) 00:15:05.873 [2024-12-17T00:30:51.876Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.873 
[2024-12-17T00:30:51.876Z] =================================================================================================================== 00:15:05.873 [2024-12-17T00:30:51.876Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:05.874 00:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:05.874 00:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:05.874 00:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84179' 00:15:05.874 00:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84179 00:15:05.874 00:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84179 00:15:06.156 00:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 84154 00:15:06.156 00:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84154 ']' 00:15:06.156 00:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84154 00:15:06.156 00:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:06.157 00:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:06.157 00:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84154 00:15:06.157 killing process with pid 84154 00:15:06.157 00:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:06.157 00:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:06.157 00:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84154' 00:15:06.157 00:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84154 00:15:06.157 00:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84154 00:15:06.157 00:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:15:06.157 00:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:06.157 00:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:06.157 00:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:15:06.157 "subsystems": [ 00:15:06.157 { 00:15:06.157 "subsystem": "keyring", 00:15:06.157 "config": [ 00:15:06.157 { 00:15:06.157 "method": "keyring_file_add_key", 00:15:06.157 "params": { 00:15:06.157 "name": "key0", 00:15:06.157 "path": "/tmp/tmp.48HBuRedJV" 00:15:06.157 } 00:15:06.157 } 00:15:06.157 ] 00:15:06.157 }, 00:15:06.157 { 00:15:06.157 "subsystem": "iobuf", 00:15:06.157 "config": [ 00:15:06.157 { 00:15:06.157 "method": "iobuf_set_options", 00:15:06.157 "params": { 00:15:06.157 "small_pool_count": 8192, 00:15:06.157 "large_pool_count": 1024, 00:15:06.157 "small_bufsize": 8192, 00:15:06.157 "large_bufsize": 135168 00:15:06.157 } 00:15:06.157 } 00:15:06.157 ] 00:15:06.157 }, 00:15:06.157 { 00:15:06.157 "subsystem": "sock", 00:15:06.157 "config": [ 00:15:06.157 { 00:15:06.157 "method": "sock_set_default_impl", 00:15:06.157 "params": { 00:15:06.157 "impl_name": "uring" 00:15:06.157 } 00:15:06.157 }, 00:15:06.157 { 00:15:06.157 "method": "sock_impl_set_options", 00:15:06.157 "params": 
{ 00:15:06.157 "impl_name": "ssl", 00:15:06.157 "recv_buf_size": 4096, 00:15:06.157 "send_buf_size": 4096, 00:15:06.157 "enable_recv_pipe": true, 00:15:06.157 "enable_quickack": false, 00:15:06.157 "enable_placement_id": 0, 00:15:06.157 "enable_zerocopy_send_server": true, 00:15:06.157 "enable_zerocopy_send_client": false, 00:15:06.157 "zerocopy_threshold": 0, 00:15:06.157 "tls_version": 0, 00:15:06.157 "enable_ktls": false 00:15:06.157 } 00:15:06.157 }, 00:15:06.157 { 00:15:06.157 "method": "sock_impl_set_options", 00:15:06.157 "params": { 00:15:06.157 "impl_name": "posix", 00:15:06.157 "recv_buf_size": 2097152, 00:15:06.157 "send_buf_size": 2097152, 00:15:06.157 "enable_recv_pipe": true, 00:15:06.157 "enable_quickack": false, 00:15:06.157 "enable_placement_id": 0, 00:15:06.157 "enable_zerocopy_send_server": true, 00:15:06.157 "enable_zerocopy_send_client": false, 00:15:06.157 "zerocopy_threshold": 0, 00:15:06.157 "tls_version": 0, 00:15:06.157 "enable_ktls": false 00:15:06.157 } 00:15:06.157 }, 00:15:06.157 { 00:15:06.157 "method": "sock_impl_set_options", 00:15:06.157 "params": { 00:15:06.157 "impl_name": "uring", 00:15:06.157 "recv_buf_size": 2097152, 00:15:06.157 "send_buf_size": 2097152, 00:15:06.157 "enable_recv_pipe": true, 00:15:06.157 "enable_quickack": false, 00:15:06.157 "enable_placement_id": 0, 00:15:06.157 "enable_zerocopy_send_server": false, 00:15:06.157 "enable_zerocopy_send_client": false, 00:15:06.157 "zerocopy_threshold": 0, 00:15:06.157 "tls_version": 0, 00:15:06.157 "enable_ktls": false 00:15:06.157 } 00:15:06.157 } 00:15:06.157 ] 00:15:06.157 }, 00:15:06.157 { 00:15:06.157 "subsystem": "vmd", 00:15:06.157 "config": [] 00:15:06.157 }, 00:15:06.157 { 00:15:06.157 "subsystem": "accel", 00:15:06.157 "config": [ 00:15:06.157 { 00:15:06.157 "method": "accel_set_options", 00:15:06.157 "params": { 00:15:06.157 "small_cache_size": 128, 00:15:06.157 "large_cache_size": 16, 00:15:06.157 "task_count": 2048, 00:15:06.157 "sequence_count": 2048, 00:15:06.157 "buf_count": 2048 00:15:06.157 } 00:15:06.157 } 00:15:06.157 ] 00:15:06.157 }, 00:15:06.157 { 00:15:06.157 "subsystem": "bdev", 00:15:06.157 "config": [ 00:15:06.157 { 00:15:06.157 "method": "bdev_set_options", 00:15:06.157 "params": { 00:15:06.157 "bdev_io_pool_size": 65535, 00:15:06.157 "bdev_io_cache_size": 256, 00:15:06.157 "bdev_auto_examine": true, 00:15:06.157 "iobuf_small_cache_size": 128, 00:15:06.157 "iobuf_large_cache_size": 16 00:15:06.157 } 00:15:06.157 }, 00:15:06.157 { 00:15:06.157 "method": "bdev_raid_set_options", 00:15:06.157 "params": { 00:15:06.157 "process_window_size_kb": 1024, 00:15:06.157 "process_max_bandwidth_mb_sec": 0 00:15:06.157 } 00:15:06.157 }, 00:15:06.157 { 00:15:06.157 "method": "bdev_iscsi_set_options", 00:15:06.157 "params": { 00:15:06.157 "timeout_sec": 30 00:15:06.157 } 00:15:06.157 }, 00:15:06.157 { 00:15:06.157 "method": "bdev_nvme_set_options", 00:15:06.157 "params": { 00:15:06.157 "action_on_timeout": "none", 00:15:06.157 "timeout_us": 0, 00:15:06.157 "timeout_admin_us": 0, 00:15:06.157 "keep_alive_timeout_ms": 10000, 00:15:06.157 "arbitration_burst": 0, 00:15:06.157 "low_priority_weight": 0, 00:15:06.157 "medium_priority_weight": 0, 00:15:06.157 "high_priority_weight": 0, 00:15:06.157 "nvme_adminq_poll_period_us": 10000, 00:15:06.157 "nvme_ioq_poll_period_us": 0, 00:15:06.157 "io_queue_requests": 0, 00:15:06.157 "delay_cmd_submit": true, 00:15:06.157 "transport_retry_count": 4, 00:15:06.157 "bdev_retry_count": 3, 00:15:06.157 "transport_ack_timeout": 0, 00:15:06.157 
"ctrlr_loss_timeout_sec": 0, 00:15:06.157 "reconnect_delay_sec": 0, 00:15:06.157 "fast_io_fail_timeout_sec": 0, 00:15:06.157 "disable_auto_failback": false, 00:15:06.157 "generate_uuids": false, 00:15:06.157 "transport_tos": 0, 00:15:06.157 "nvme_error_stat": false, 00:15:06.157 "rdma_srq_size": 0, 00:15:06.157 "io_path_stat": false, 00:15:06.157 "allow_accel_sequence": false, 00:15:06.157 "rdma_max_cq_size": 0, 00:15:06.157 "rdma_cm_event_timeout_ms": 0, 00:15:06.157 "dhchap_digests": [ 00:15:06.157 "sha256", 00:15:06.157 "sha384", 00:15:06.157 "sha512" 00:15:06.157 ], 00:15:06.157 "dhchap_dhgroups": [ 00:15:06.157 "null", 00:15:06.157 "ffdhe2048", 00:15:06.157 "ffdhe3072", 00:15:06.157 "ffdhe4096", 00:15:06.157 "ffdhe6144", 00:15:06.157 "ffdhe8192" 00:15:06.157 ] 00:15:06.157 } 00:15:06.157 }, 00:15:06.157 { 00:15:06.157 "method": "bdev_nvme_set_hotplug", 00:15:06.157 "params": { 00:15:06.157 "period_us": 100000, 00:15:06.157 "enable": false 00:15:06.157 } 00:15:06.157 }, 00:15:06.157 { 00:15:06.157 "method": "bdev_malloc_create", 00:15:06.157 "params": { 00:15:06.157 "name": "malloc0", 00:15:06.157 "num_blocks": 8192, 00:15:06.157 "block_size": 4096, 00:15:06.157 "physical_block_size": 4096, 00:15:06.157 "uuid": "46bedbc1-8471-49db-8fbe-1d96b6af186f", 00:15:06.157 "optimal_io_boundary": 0, 00:15:06.157 "md_size": 0, 00:15:06.157 "dif_type": 0, 00:15:06.157 "dif_is_head_of_md": false, 00:15:06.157 "dif_pi_format": 0 00:15:06.157 } 00:15:06.157 }, 00:15:06.157 { 00:15:06.157 "method": "bdev_wait_for_examine" 00:15:06.157 } 00:15:06.157 ] 00:15:06.157 }, 00:15:06.157 { 00:15:06.157 "subsystem": "nbd", 00:15:06.157 "config": [] 00:15:06.157 }, 00:15:06.157 { 00:15:06.157 "subsystem": "scheduler", 00:15:06.157 "config": [ 00:15:06.157 { 00:15:06.157 "method": "framework_set_scheduler", 00:15:06.157 "params": { 00:15:06.157 "name": "static" 00:15:06.157 } 00:15:06.157 } 00:15:06.157 ] 00:15:06.157 }, 00:15:06.157 { 00:15:06.157 "subsystem": "nvmf", 00:15:06.157 "config": [ 00:15:06.157 { 00:15:06.157 "method": "nvmf_set_config", 00:15:06.157 "params": { 00:15:06.157 "discovery_filter": "match_any", 00:15:06.157 "admin_cmd_passthru": { 00:15:06.157 "identify_ctrlr": false 00:15:06.157 }, 00:15:06.157 "dhchap_digests": [ 00:15:06.157 "sha256", 00:15:06.157 "sha384", 00:15:06.157 "sha512" 00:15:06.157 ], 00:15:06.157 "dhchap_dhgroups": [ 00:15:06.157 "null", 00:15:06.157 "ffdhe2048", 00:15:06.157 "ffdhe3072", 00:15:06.157 "ffdhe4096", 00:15:06.157 "ffdhe6144", 00:15:06.157 "ffdhe8192" 00:15:06.157 ] 00:15:06.157 } 00:15:06.157 }, 00:15:06.157 { 00:15:06.157 "method": "nvmf_set_max_subsystems", 00:15:06.157 "params": { 00:15:06.157 "max_ 00:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:06.157 subsystems": 1024 00:15:06.157 } 00:15:06.157 }, 00:15:06.157 { 00:15:06.157 "method": "nvmf_set_crdt", 00:15:06.157 "params": { 00:15:06.157 "crdt1": 0, 00:15:06.157 "crdt2": 0, 00:15:06.157 "crdt3": 0 00:15:06.157 } 00:15:06.157 }, 00:15:06.157 { 00:15:06.157 "method": "nvmf_create_transport", 00:15:06.157 "params": { 00:15:06.157 "trtype": "TCP", 00:15:06.157 "max_queue_depth": 128, 00:15:06.157 "max_io_qpairs_per_ctrlr": 127, 00:15:06.157 "in_capsule_data_size": 4096, 00:15:06.157 "max_io_size": 131072, 00:15:06.157 "io_unit_size": 131072, 00:15:06.157 "max_aq_depth": 128, 00:15:06.158 "num_shared_buffers": 511, 00:15:06.158 "buf_cache_size": 4294967295, 00:15:06.158 "dif_insert_or_strip": false, 00:15:06.158 "zcopy": false, 00:15:06.158 "c2h_success": false, 
00:15:06.158 "sock_priority": 0, 00:15:06.158 "abort_timeout_sec": 1, 00:15:06.158 "ack_timeout": 0, 00:15:06.158 "data_wr_pool_size": 0 00:15:06.158 } 00:15:06.158 }, 00:15:06.158 { 00:15:06.158 "method": "nvmf_create_subsystem", 00:15:06.158 "params": { 00:15:06.158 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.158 "allow_any_host": false, 00:15:06.158 "serial_number": "00000000000000000000", 00:15:06.158 "model_number": "SPDK bdev Controller", 00:15:06.158 "max_namespaces": 32, 00:15:06.158 "min_cntlid": 1, 00:15:06.158 "max_cntlid": 65519, 00:15:06.158 "ana_reporting": false 00:15:06.158 } 00:15:06.158 }, 00:15:06.158 { 00:15:06.158 "method": "nvmf_subsystem_add_host", 00:15:06.158 "params": { 00:15:06.158 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.158 "host": "nqn.2016-06.io.spdk:host1", 00:15:06.158 "psk": "key0" 00:15:06.158 } 00:15:06.158 }, 00:15:06.158 { 00:15:06.158 "method": "nvmf_subsystem_add_ns", 00:15:06.158 "params": { 00:15:06.158 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.158 "namespace": { 00:15:06.158 "nsid": 1, 00:15:06.158 "bdev_name": "malloc0", 00:15:06.158 "nguid": "46BEDBC1847149DB8FBE1D96B6AF186F", 00:15:06.158 "uuid": "46bedbc1-8471-49db-8fbe-1d96b6af186f", 00:15:06.158 "no_auto_visible": false 00:15:06.158 } 00:15:06.158 } 00:15:06.158 }, 00:15:06.158 { 00:15:06.158 "method": "nvmf_subsystem_add_listener", 00:15:06.158 "params": { 00:15:06.158 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.158 "listen_address": { 00:15:06.158 "trtype": "TCP", 00:15:06.158 "adrfam": "IPv4", 00:15:06.158 "traddr": "10.0.0.3", 00:15:06.158 "trsvcid": "4420" 00:15:06.158 }, 00:15:06.158 "secure_channel": false, 00:15:06.158 "sock_impl": "ssl" 00:15:06.158 } 00:15:06.158 } 00:15:06.158 ] 00:15:06.158 } 00:15:06.158 ] 00:15:06.158 }' 00:15:06.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.158 00:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=84226 00:15:06.158 00:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:06.158 00:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 84226 00:15:06.158 00:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84226 ']' 00:15:06.158 00:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.158 00:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:06.158 00:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.158 00:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:06.158 00:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:06.417 [2024-12-17 00:30:52.217992] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:15:06.417 [2024-12-17 00:30:52.218321] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:06.417 [2024-12-17 00:30:52.360536] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.417 [2024-12-17 00:30:52.395748] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:06.417 [2024-12-17 00:30:52.396025] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:06.417 [2024-12-17 00:30:52.396060] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:06.417 [2024-12-17 00:30:52.396068] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:06.417 [2024-12-17 00:30:52.396074] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:06.417 [2024-12-17 00:30:52.396146] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.676 [2024-12-17 00:30:52.546362] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:06.676 [2024-12-17 00:30:52.605792] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:06.676 [2024-12-17 00:30:52.644812] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:06.676 [2024-12-17 00:30:52.645185] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:07.611 00:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:07.611 00:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:07.611 00:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:07.611 00:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:07.611 00:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:07.611 00:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:07.611 00:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=84264 00:15:07.611 00:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 84264 /var/tmp/bdevperf.sock 00:15:07.611 00:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:15:07.611 "subsystems": [ 00:15:07.611 { 00:15:07.611 "subsystem": "keyring", 00:15:07.611 "config": [ 00:15:07.611 { 00:15:07.611 "method": "keyring_file_add_key", 00:15:07.611 "params": { 00:15:07.611 "name": "key0", 00:15:07.611 "path": "/tmp/tmp.48HBuRedJV" 00:15:07.611 } 00:15:07.611 } 00:15:07.611 ] 00:15:07.611 }, 00:15:07.611 { 00:15:07.611 "subsystem": "iobuf", 00:15:07.611 "config": [ 00:15:07.611 { 00:15:07.611 "method": "iobuf_set_options", 00:15:07.611 "params": { 00:15:07.611 "small_pool_count": 8192, 00:15:07.611 "large_pool_count": 1024, 00:15:07.611 "small_bufsize": 8192, 00:15:07.611 "large_bufsize": 135168 00:15:07.611 } 00:15:07.611 } 00:15:07.611 ] 00:15:07.611 }, 00:15:07.611 { 00:15:07.611 "subsystem": "sock", 00:15:07.611 "config": [ 00:15:07.611 { 00:15:07.611 "method": "sock_set_default_impl", 00:15:07.611 "params": { 
00:15:07.611 "impl_name": "uring" 00:15:07.611 } 00:15:07.611 }, 00:15:07.611 { 00:15:07.611 "method": "sock_impl_set_options", 00:15:07.611 "params": { 00:15:07.611 "impl_name": "ssl", 00:15:07.611 "recv_buf_size": 4096, 00:15:07.611 "send_buf_size": 4096, 00:15:07.611 "enable_recv_pipe": true, 00:15:07.611 "enable_quickack": false, 00:15:07.611 "enable_placement_id": 0, 00:15:07.611 "enable_zerocopy_send_server": true, 00:15:07.611 "enable_zerocopy_send_client": false, 00:15:07.611 "zerocopy_threshold": 0, 00:15:07.611 "tls_version": 0, 00:15:07.611 "enable_ktls": false 00:15:07.611 } 00:15:07.611 }, 00:15:07.611 { 00:15:07.611 "method": "sock_impl_set_options", 00:15:07.611 "params": { 00:15:07.611 "impl_name": "posix", 00:15:07.611 "recv_buf_size": 2097152, 00:15:07.611 "send_buf_size": 2097152, 00:15:07.611 "enable_recv_pipe": true, 00:15:07.611 "enable_quickack": false, 00:15:07.611 "enable_placement_id": 0, 00:15:07.611 "enable_zerocopy_send_server": true, 00:15:07.611 "enable_zerocopy_send_client": false, 00:15:07.611 "zerocopy_threshold": 0, 00:15:07.611 "tls_version": 0, 00:15:07.611 "enable_ktls": false 00:15:07.611 } 00:15:07.611 }, 00:15:07.611 { 00:15:07.611 "method": "sock_impl_set_options", 00:15:07.611 "params": { 00:15:07.611 "impl_name": "uring", 00:15:07.611 "recv_buf_size": 2097152, 00:15:07.611 "send_buf_size": 2097152, 00:15:07.611 "enable_recv_pipe": true, 00:15:07.611 "enable_quickack": false, 00:15:07.611 "enable_placement_id": 0, 00:15:07.611 "enable_zerocopy_send_server": false, 00:15:07.611 "enable_zerocopy_send_client": false, 00:15:07.611 "zerocopy_threshold": 0, 00:15:07.611 "tls_version": 0, 00:15:07.611 "enable_ktls": false 00:15:07.611 } 00:15:07.611 } 00:15:07.611 ] 00:15:07.611 }, 00:15:07.611 { 00:15:07.611 "subsystem": "vmd", 00:15:07.611 "config": [] 00:15:07.611 }, 00:15:07.611 { 00:15:07.611 "subsystem": "accel", 00:15:07.611 "config": [ 00:15:07.611 { 00:15:07.611 "method": "accel_set_options", 00:15:07.611 "params": { 00:15:07.611 "small_cache_size": 128, 00:15:07.611 "large_cache_size": 16, 00:15:07.611 "task_count": 2048, 00:15:07.611 "sequence_count": 2048, 00:15:07.611 "buf_count": 2048 00:15:07.611 } 00:15:07.611 } 00:15:07.611 ] 00:15:07.611 }, 00:15:07.611 { 00:15:07.611 "subsystem": "bdev", 00:15:07.611 "config": [ 00:15:07.611 { 00:15:07.611 "method": "bdev_set_options", 00:15:07.611 "params": { 00:15:07.611 "bdev_io_pool_size": 65535, 00:15:07.611 "bdev_io_cache_size": 256, 00:15:07.611 "bdev_auto_examine": true, 00:15:07.611 "iobuf_small_cache_size": 128, 00:15:07.611 "iobuf_large_cache_size": 16 00:15:07.611 } 00:15:07.611 }, 00:15:07.611 { 00:15:07.611 "method": "bdev_raid_set_options", 00:15:07.611 "params": { 00:15:07.611 "process_window_size_kb": 1024, 00:15:07.611 "process_max_bandwidth_mb_sec": 0 00:15:07.611 } 00:15:07.611 }, 00:15:07.611 { 00:15:07.611 "method": "bdev_iscsi_set_options", 00:15:07.611 "params": { 00:15:07.611 "timeout_sec": 30 00:15:07.611 } 00:15:07.611 }, 00:15:07.611 { 00:15:07.611 "method": "bdev_nvme_set_options", 00:15:07.611 "params": { 00:15:07.611 "action_on_timeout": "none", 00:15:07.611 "timeout_us": 0, 00:15:07.611 "timeout_admin_us": 0, 00:15:07.611 "keep_alive_timeout_ms": 10000, 00:15:07.611 "arbitration_burst": 0, 00:15:07.611 "low_priority_weight": 0, 00:15:07.611 "medium_priority_weight": 0, 00:15:07.611 "high_priority_weight": 0, 00:15:07.611 "nvme_adminq_poll_period_us": 10000, 00:15:07.611 "nvme_ioq_poll_period_us": 0, 00:15:07.611 "io_queue_requests": 512, 00:15:07.611 "delay_cmd_submit": 
true, 00:15:07.611 "transport_retry_count": 4, 00:15:07.611 "bdev_retry_count": 3, 00:15:07.611 00:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:07.611 00:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84264 ']' 00:15:07.611 "transport_ack_timeout": 0, 00:15:07.611 "ctrlr_loss_timeout_sec": 0, 00:15:07.611 "reconnect_delay_sec": 0, 00:15:07.611 "fast_io_fail_timeout_sec": 0, 00:15:07.611 "disable_auto_failback": false, 00:15:07.611 "generate_uuids": false, 00:15:07.611 "transport_tos": 0, 00:15:07.611 "nvme_error_stat": false, 00:15:07.611 "rdma_srq_size": 0, 00:15:07.611 "io_path_stat": false, 00:15:07.611 "allow_accel_sequence": false, 00:15:07.611 "rdma_max_cq_size": 0, 00:15:07.611 "rdma_cm_event_timeout_ms": 0, 00:15:07.611 "dhchap_digests": [ 00:15:07.611 "sha256", 00:15:07.611 "sha384", 00:15:07.611 "sha512" 00:15:07.611 ], 00:15:07.611 "dhchap_dhgroups": [ 00:15:07.611 "null", 00:15:07.611 "ffdhe2048", 00:15:07.611 "ffdhe3072", 00:15:07.611 "ffdhe4096", 00:15:07.611 "ffdhe6144", 00:15:07.611 "ffdhe8192" 00:15:07.611 ] 00:15:07.611 } 00:15:07.611 }, 00:15:07.611 { 00:15:07.611 "method": "bdev_nvme_attach_controller", 00:15:07.611 "params": { 00:15:07.611 "name": "nvme0", 00:15:07.611 "trtype": "TCP", 00:15:07.611 "adrfam": "IPv4", 00:15:07.611 "traddr": "10.0.0.3", 00:15:07.611 "trsvcid": "4420", 00:15:07.611 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:07.611 "prchk_reftag": false, 00:15:07.611 "prchk_guard": false, 00:15:07.611 "ctrlr_loss_timeout_sec": 0, 00:15:07.611 "reconnect_delay_sec": 0, 00:15:07.611 "fast_io_fail_timeout_sec": 0, 00:15:07.611 "psk": "key0", 00:15:07.611 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:07.611 "hdgst": false, 00:15:07.611 "ddgst": false 00:15:07.611 } 00:15:07.611 }, 00:15:07.611 { 00:15:07.611 "method": "bdev_nvme_set_hotplug", 00:15:07.612 "params": { 00:15:07.612 "period_us": 100000, 00:15:07.612 "enable": false 00:15:07.612 } 00:15:07.612 }, 00:15:07.612 { 00:15:07.612 "method": "bdev_enable_histogram", 00:15:07.612 "params": { 00:15:07.612 "name": "nvme0n1", 00:15:07.612 "enable": true 00:15:07.612 } 00:15:07.612 }, 00:15:07.612 { 00:15:07.612 "method": "bdev_wait_for_examine" 00:15:07.612 } 00:15:07.612 ] 00:15:07.612 }, 00:15:07.612 { 00:15:07.612 "subsystem": "nbd", 00:15:07.612 "config": [] 00:15:07.612 } 00:15:07.612 ] 00:15:07.612 }' 00:15:07.612 00:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:07.612 00:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:07.612 00:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:07.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:07.612 00:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:07.612 00:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:07.612 [2024-12-17 00:30:53.345517] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:15:07.612 [2024-12-17 00:30:53.345616] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84264 ] 00:15:07.612 [2024-12-17 00:30:53.487473] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.612 [2024-12-17 00:30:53.529694] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:07.870 [2024-12-17 00:30:53.647545] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:07.870 [2024-12-17 00:30:53.679810] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:08.437 00:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:08.437 00:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:08.437 00:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:08.437 00:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:15:09.005 00:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.005 00:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:09.005 Running I/O for 1 seconds... 00:15:09.939 4082.00 IOPS, 15.95 MiB/s 00:15:09.939 Latency(us) 00:15:09.939 [2024-12-17T00:30:55.942Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:09.939 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:09.939 Verification LBA range: start 0x0 length 0x2000 00:15:09.939 nvme0n1 : 1.03 4086.71 15.96 0.00 0.00 30942.48 7626.01 20375.74 00:15:09.939 [2024-12-17T00:30:55.942Z] =================================================================================================================== 00:15:09.939 [2024-12-17T00:30:55.942Z] Total : 4086.71 15.96 0.00 0.00 30942.48 7626.01 20375.74 00:15:09.939 { 00:15:09.939 "results": [ 00:15:09.939 { 00:15:09.939 "job": "nvme0n1", 00:15:09.939 "core_mask": "0x2", 00:15:09.939 "workload": "verify", 00:15:09.939 "status": "finished", 00:15:09.940 "verify_range": { 00:15:09.940 "start": 0, 00:15:09.940 "length": 8192 00:15:09.940 }, 00:15:09.940 "queue_depth": 128, 00:15:09.940 "io_size": 4096, 00:15:09.940 "runtime": 1.030168, 00:15:09.940 "iops": 4086.7120702642674, 00:15:09.940 "mibps": 15.963719024469794, 00:15:09.940 "io_failed": 0, 00:15:09.940 "io_timeout": 0, 00:15:09.940 "avg_latency_us": 30942.480639170804, 00:15:09.940 "min_latency_us": 7626.007272727273, 00:15:09.940 "max_latency_us": 20375.738181818182 00:15:09.940 } 00:15:09.940 ], 00:15:09.940 "core_count": 1 00:15:09.940 } 00:15:09.940 00:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:15:09.940 00:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:15:09.940 00:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:15:09.940 00:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:15:09.940 00:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 
00:15:09.940 00:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:15:09.940 00:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:09.940 00:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:15:09.940 00:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:15:09.940 00:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:15:09.940 00:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:09.940 nvmf_trace.0 00:15:10.199 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:15:10.199 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 84264 00:15:10.199 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84264 ']' 00:15:10.199 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84264 00:15:10.199 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:10.199 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:10.199 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84264 00:15:10.199 killing process with pid 84264 00:15:10.199 Received shutdown signal, test time was about 1.000000 seconds 00:15:10.199 00:15:10.199 Latency(us) 00:15:10.199 [2024-12-17T00:30:56.202Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.199 [2024-12-17T00:30:56.202Z] =================================================================================================================== 00:15:10.199 [2024-12-17T00:30:56.202Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:10.199 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:10.199 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:10.199 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84264' 00:15:10.199 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84264 00:15:10.199 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84264 00:15:10.457 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:15:10.457 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:10.457 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:15:10.457 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:10.457 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:15:10.457 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:10.457 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:10.457 rmmod nvme_tcp 00:15:10.457 rmmod nvme_fabrics 00:15:10.457 rmmod nvme_keyring 00:15:10.457 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:10.457 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:15:10.457 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:15:10.457 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@513 -- # '[' -n 84226 ']' 00:15:10.457 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # killprocess 84226 00:15:10.457 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84226 ']' 00:15:10.457 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84226 00:15:10.457 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:10.457 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:10.457 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84226 00:15:10.457 killing process with pid 84226 00:15:10.457 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:10.457 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:10.457 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84226' 00:15:10.457 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84226 00:15:10.457 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84226 00:15:10.716 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:15:10.716 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:10.716 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:10.716 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:15:10.716 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-save 00:15:10.716 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:10.716 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-restore 00:15:10.716 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:10.716 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:10.716 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:10.716 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:10.716 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:10.716 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:10.716 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:10.716 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:10.716 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:10.716 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:10.716 00:30:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:10.716 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:10.716 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:10.716 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:10.716 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:10.975 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:10.975 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.975 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:10.975 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.975 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:15:10.975 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.ZrkW4zZtlZ /tmp/tmp.bQTsQwCKsc /tmp/tmp.48HBuRedJV 00:15:10.975 00:15:10.975 real 1m22.154s 00:15:10.975 user 2m14.730s 00:15:10.975 sys 0m26.094s 00:15:10.975 ************************************ 00:15:10.975 END TEST nvmf_tls 00:15:10.975 ************************************ 00:15:10.975 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:10.975 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:10.975 00:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:10.975 00:30:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:10.975 00:30:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:10.975 00:30:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:10.975 ************************************ 00:15:10.975 START TEST nvmf_fips 00:15:10.975 ************************************ 00:15:10.975 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:10.975 * Looking for test storage... 
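The nvmf_tls suite that ended just above tears itself down by killing the bdevperf and target processes, running nvmftestfini to dismantle the veth/netns topology, and deleting its three temporary PSK files; run_test then launches the FIPS suite whose output begins here. A rough manual equivalent of that cleanup, using the PIDs and key paths from this particular run:

  kill 84264        # bdevperf (pid from this run)
  kill 84226        # nvmf_tgt (pid from this run)
  rm -f /tmp/tmp.ZrkW4zZtlZ /tmp/tmp.bQTsQwCKsc /tmp/tmp.48HBuRedJV   # temporary TLS PSK files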
00:15:10.975 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:15:10.975 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:10.975 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:15:10.975 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:11.236 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:11.236 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:11.236 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:11.236 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:11.236 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:11.236 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:11.236 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:11.236 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:11.236 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:15:11.236 00:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:15:11.236 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:15:11.236 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:11.236 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:11.236 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:15:11.236 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:11.236 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:11.236 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:11.236 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:11.236 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:11.236 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:11.236 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:11.236 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:15:11.236 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:15:11.236 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:11.236 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:15:11.236 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:15:11.236 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:11.236 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:11.236 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:15:11.236 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:11.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.237 --rc genhtml_branch_coverage=1 00:15:11.237 --rc genhtml_function_coverage=1 00:15:11.237 --rc genhtml_legend=1 00:15:11.237 --rc geninfo_all_blocks=1 00:15:11.237 --rc geninfo_unexecuted_blocks=1 00:15:11.237 00:15:11.237 ' 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:11.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.237 --rc genhtml_branch_coverage=1 00:15:11.237 --rc genhtml_function_coverage=1 00:15:11.237 --rc genhtml_legend=1 00:15:11.237 --rc geninfo_all_blocks=1 00:15:11.237 --rc geninfo_unexecuted_blocks=1 00:15:11.237 00:15:11.237 ' 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:11.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.237 --rc genhtml_branch_coverage=1 00:15:11.237 --rc genhtml_function_coverage=1 00:15:11.237 --rc genhtml_legend=1 00:15:11.237 --rc geninfo_all_blocks=1 00:15:11.237 --rc geninfo_unexecuted_blocks=1 00:15:11.237 00:15:11.237 ' 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:11.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.237 --rc genhtml_branch_coverage=1 00:15:11.237 --rc genhtml_function_coverage=1 00:15:11.237 --rc genhtml_legend=1 00:15:11.237 --rc geninfo_all_blocks=1 00:15:11.237 --rc geninfo_unexecuted_blocks=1 00:15:11.237 00:15:11.237 ' 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:11.237 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:15:11.237 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:15:11.238 Error setting digest 00:15:11.238 40527F042B7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:15:11.238 40527F042B7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:15:11.238 
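The "Error setting digest" lines above are the expected outcome: with the FIPS provider loaded through the generated spdk_fips.conf, MD5 must be rejected. A hand-run equivalent of the same three checks, assuming an OpenSSL 3.x host with FIPS configured (the module path varies by distribution):

    # 1. The FIPS provider module should exist where OpenSSL looks for providers
    ls "$(openssl info -modulesdir)/fips.so"

    # 2. Both a base and a fips provider should be listed as active
    openssl list -providers | grep name

    # 3. MD5 is not FIPS-approved, so this command is expected to fail
    echo test | openssl md5 \
        && echo "MD5 unexpectedly succeeded" \
        || echo "MD5 rejected as expected"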
00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@456 -- # nvmf_veth_init 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:11.238 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:11.497 Cannot find device "nvmf_init_br" 00:15:11.497 00:30:57 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:15:11.497 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:11.497 Cannot find device "nvmf_init_br2" 00:15:11.497 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:15:11.497 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:11.497 Cannot find device "nvmf_tgt_br" 00:15:11.497 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:15:11.497 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:11.497 Cannot find device "nvmf_tgt_br2" 00:15:11.497 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:15:11.497 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:11.497 Cannot find device "nvmf_init_br" 00:15:11.498 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:15:11.498 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:11.498 Cannot find device "nvmf_init_br2" 00:15:11.498 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:15:11.498 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:11.498 Cannot find device "nvmf_tgt_br" 00:15:11.498 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:15:11.498 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:11.498 Cannot find device "nvmf_tgt_br2" 00:15:11.498 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:15:11.498 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:11.498 Cannot find device "nvmf_br" 00:15:11.498 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:15:11.498 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:11.498 Cannot find device "nvmf_init_if" 00:15:11.498 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:15:11.498 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:11.498 Cannot find device "nvmf_init_if2" 00:15:11.498 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:15:11.498 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:11.498 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:11.498 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:15:11.498 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:11.498 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:11.498 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:15:11.498 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:11.498 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:11.498 00:30:57 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:11.498 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:11.498 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:11.498 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:11.498 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:11.498 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:11.498 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:11.756 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:11.756 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:11.756 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:11.756 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:11.756 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:11.757 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:11.757 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:15:11.757 00:15:11.757 --- 10.0.0.3 ping statistics --- 00:15:11.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.757 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:11.757 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:11.757 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:15:11.757 00:15:11.757 --- 10.0.0.4 ping statistics --- 00:15:11.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.757 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:11.757 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:11.757 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:15:11.757 00:15:11.757 --- 10.0.0.1 ping statistics --- 00:15:11.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.757 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:11.757 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:11.757 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:15:11.757 00:15:11.757 --- 10.0.0.2 ping statistics --- 00:15:11.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.757 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@457 -- # return 0 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:11.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # nvmfpid=84586 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # waitforlisten 84586 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 84586 ']' 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:11.757 00:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:12.016 [2024-12-17 00:30:57.794759] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
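The nvmf_veth_init sequence above builds a bridged test network: initiator veths with 10.0.0.1/10.0.0.2 stay in the root namespace, target veths with 10.0.0.3/10.0.0.4 move into nvmf_tgt_ns_spdk, all peer ends are enslaved to the nvmf_br bridge, port 4420 is opened in iptables, and nvmf_tgt is then launched inside the namespace. A trimmed sketch of the same bring-up with a single initiator/target pair (names and addresses taken from the log; the second pair, error handling, and absolute paths are omitted):

    # One veth pair for the initiator side, one for the target side
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # Address the endpoints and bring everything up
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

    # Bridge the peer ends so 10.0.0.1 (root ns) can reach 10.0.0.3 (test ns)
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # Open the NVMe/TCP port, allow bridged forwarding, start the target in the ns
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -m 0x2 &

    # Sanity check: the target-side address should answer from the root namespace
    ping -c 1 10.0.0.3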
00:15:12.016 [2024-12-17 00:30:57.795049] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.016 [2024-12-17 00:30:57.932580] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.016 [2024-12-17 00:30:57.976883] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.016 [2024-12-17 00:30:57.977160] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:12.016 [2024-12-17 00:30:57.977370] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:12.016 [2024-12-17 00:30:57.977507] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:12.016 [2024-12-17 00:30:57.977523] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:12.016 [2024-12-17 00:30:57.977558] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.016 [2024-12-17 00:30:58.013039] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:12.275 00:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:12.275 00:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:15:12.275 00:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:12.275 00:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:12.275 00:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:12.275 00:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.275 00:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:15:12.275 00:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:12.275 00:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:15:12.275 00:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.1AA 00:15:12.275 00:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:12.275 00:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.1AA 00:15:12.275 00:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.1AA 00:15:12.275 00:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.1AA 00:15:12.275 00:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:12.533 [2024-12-17 00:30:58.408530] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:12.533 [2024-12-17 00:30:58.424581] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:12.533 [2024-12-17 00:30:58.424796] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:12.533 malloc0 00:15:12.533 00:30:58 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:12.533 00:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=84616 00:15:12.533 00:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:12.533 00:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 84616 /var/tmp/bdevperf.sock 00:15:12.533 00:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 84616 ']' 00:15:12.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:12.533 00:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:12.533 00:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:12.533 00:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:12.533 00:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:12.533 00:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:12.792 [2024-12-17 00:30:58.578898] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:12.792 [2024-12-17 00:30:58.579217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84616 ] 00:15:12.792 [2024-12-17 00:30:58.715955] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.792 [2024-12-17 00:30:58.753718] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:12.792 [2024-12-17 00:30:58.784600] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:13.050 00:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:13.050 00:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:15:13.050 00:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.1AA 00:15:13.309 00:30:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:13.568 [2024-12-17 00:30:59.397908] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:13.568 TLSTESTn1 00:15:13.568 00:30:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:13.827 Running I/O for 10 seconds... 
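With the target listening over TLS on 10.0.0.3:4420, the test side loads the PSK interchange file into bdevperf's keyring and attaches a controller with --psk before the 10-second verify run. A condensed sketch of that RPC sequence, using the same socket and key paths as the log, with repo-relative binary paths as an assumption:

    # Start bdevperf on its own RPC socket; -z makes it wait for perform_tests
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 &

    # Register the PSK file and attach an NVMe/TCP controller over TLS
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.1AA
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

    # Drive the configured verify workload against the attached namespace (TLSTESTn1)
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests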
00:15:15.698 4096.00 IOPS, 16.00 MiB/s [2024-12-17T00:31:02.638Z] 4196.00 IOPS, 16.39 MiB/s [2024-12-17T00:31:03.608Z] 4207.00 IOPS, 16.43 MiB/s [2024-12-17T00:31:04.986Z] 4194.25 IOPS, 16.38 MiB/s [2024-12-17T00:31:05.929Z] 4246.00 IOPS, 16.59 MiB/s [2024-12-17T00:31:06.866Z] 4240.50 IOPS, 16.56 MiB/s [2024-12-17T00:31:07.803Z] 4221.00 IOPS, 16.49 MiB/s [2024-12-17T00:31:08.740Z] 4204.38 IOPS, 16.42 MiB/s [2024-12-17T00:31:09.676Z] 4183.56 IOPS, 16.34 MiB/s [2024-12-17T00:31:09.676Z] 4171.10 IOPS, 16.29 MiB/s 00:15:23.673 Latency(us) 00:15:23.673 [2024-12-17T00:31:09.676Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:23.673 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:23.673 Verification LBA range: start 0x0 length 0x2000 00:15:23.673 TLSTESTn1 : 10.02 4176.73 16.32 0.00 0.00 30590.46 5749.29 39083.29 00:15:23.673 [2024-12-17T00:31:09.676Z] =================================================================================================================== 00:15:23.673 [2024-12-17T00:31:09.676Z] Total : 4176.73 16.32 0.00 0.00 30590.46 5749.29 39083.29 00:15:23.673 { 00:15:23.673 "results": [ 00:15:23.673 { 00:15:23.673 "job": "TLSTESTn1", 00:15:23.673 "core_mask": "0x4", 00:15:23.673 "workload": "verify", 00:15:23.673 "status": "finished", 00:15:23.673 "verify_range": { 00:15:23.673 "start": 0, 00:15:23.673 "length": 8192 00:15:23.674 }, 00:15:23.674 "queue_depth": 128, 00:15:23.674 "io_size": 4096, 00:15:23.674 "runtime": 10.016682, 00:15:23.674 "iops": 4176.732375051938, 00:15:23.674 "mibps": 16.315360840046633, 00:15:23.674 "io_failed": 0, 00:15:23.674 "io_timeout": 0, 00:15:23.674 "avg_latency_us": 30590.45747048611, 00:15:23.674 "min_latency_us": 5749.294545454545, 00:15:23.674 "max_latency_us": 39083.28727272727 00:15:23.674 } 00:15:23.674 ], 00:15:23.674 "core_count": 1 00:15:23.674 } 00:15:23.674 00:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:15:23.674 00:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:15:23.674 00:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:15:23.674 00:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:15:23.674 00:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:15:23.674 00:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:23.674 00:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:15:23.674 00:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:15:23.674 00:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:15:23.674 00:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:23.674 nvmf_trace.0 00:15:23.933 00:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:15:23.933 00:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 84616 00:15:23.933 00:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 84616 ']' 00:15:23.933 00:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 
84616 00:15:23.933 00:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:15:23.933 00:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:23.933 00:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84616 00:15:23.933 killing process with pid 84616 00:15:23.933 Received shutdown signal, test time was about 10.000000 seconds 00:15:23.933 00:15:23.933 Latency(us) 00:15:23.933 [2024-12-17T00:31:09.936Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:23.933 [2024-12-17T00:31:09.936Z] =================================================================================================================== 00:15:23.933 [2024-12-17T00:31:09.936Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:23.933 00:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:23.933 00:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:23.933 00:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84616' 00:15:23.933 00:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 84616 00:15:23.933 00:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 84616 00:15:24.192 00:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:15:24.192 00:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:24.192 00:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:15:24.192 00:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:24.192 00:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:15:24.192 00:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:24.192 00:31:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:24.192 rmmod nvme_tcp 00:15:24.192 rmmod nvme_fabrics 00:15:24.192 rmmod nvme_keyring 00:15:24.192 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:24.192 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:15:24.192 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:15:24.192 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@513 -- # '[' -n 84586 ']' 00:15:24.192 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # killprocess 84586 00:15:24.192 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 84586 ']' 00:15:24.192 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 84586 00:15:24.192 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:15:24.192 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:24.192 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84586 00:15:24.192 killing process with pid 84586 00:15:24.192 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:24.192 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:24.192 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84586' 00:15:24.192 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 84586 00:15:24.192 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 84586 00:15:24.450 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:15:24.450 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:24.450 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:24.450 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:15:24.450 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-save 00:15:24.450 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:24.450 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-restore 00:15:24.450 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:24.450 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:24.450 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:24.451 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:24.451 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:24.451 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:24.451 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:24.451 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:24.451 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:24.451 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:24.451 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:24.451 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:24.451 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:24.451 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:24.710 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:24.710 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:24.710 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:24.710 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:24.710 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:24.710 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:15:24.710 00:31:10 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.1AA 00:15:24.710 ************************************ 00:15:24.710 END TEST nvmf_fips 00:15:24.710 ************************************ 00:15:24.710 00:15:24.710 real 0m13.698s 00:15:24.710 user 0m18.675s 00:15:24.710 sys 0m5.616s 00:15:24.710 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:24.710 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:24.710 00:31:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:24.710 00:31:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:24.710 00:31:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:24.710 00:31:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:24.710 ************************************ 00:15:24.710 START TEST nvmf_control_msg_list 00:15:24.710 ************************************ 00:15:24.710 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:24.710 * Looking for test storage... 00:15:24.710 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:24.710 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:24.710 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:15:24.710 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:24.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.970 --rc genhtml_branch_coverage=1 00:15:24.970 --rc genhtml_function_coverage=1 00:15:24.970 --rc genhtml_legend=1 00:15:24.970 --rc geninfo_all_blocks=1 00:15:24.970 --rc geninfo_unexecuted_blocks=1 00:15:24.970 00:15:24.970 ' 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:24.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.970 --rc genhtml_branch_coverage=1 00:15:24.970 --rc genhtml_function_coverage=1 00:15:24.970 --rc genhtml_legend=1 00:15:24.970 --rc geninfo_all_blocks=1 00:15:24.970 --rc geninfo_unexecuted_blocks=1 00:15:24.970 00:15:24.970 ' 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:24.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.970 --rc genhtml_branch_coverage=1 00:15:24.970 --rc genhtml_function_coverage=1 00:15:24.970 --rc genhtml_legend=1 00:15:24.970 --rc geninfo_all_blocks=1 00:15:24.970 --rc geninfo_unexecuted_blocks=1 00:15:24.970 00:15:24.970 ' 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:24.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.970 --rc genhtml_branch_coverage=1 00:15:24.970 --rc genhtml_function_coverage=1 00:15:24.970 --rc genhtml_legend=1 00:15:24.970 --rc geninfo_all_blocks=1 00:15:24.970 --rc 
geninfo_unexecuted_blocks=1 00:15:24.970 00:15:24.970 ' 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:24.970 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:24.970 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@456 -- # nvmf_veth_init 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:24.971 Cannot find device "nvmf_init_br" 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:24.971 Cannot find device "nvmf_init_br2" 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:24.971 Cannot find device "nvmf_tgt_br" 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:24.971 Cannot find device "nvmf_tgt_br2" 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:24.971 Cannot find device "nvmf_init_br" 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:24.971 Cannot find device "nvmf_init_br2" 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:24.971 Cannot find device "nvmf_tgt_br" 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:24.971 Cannot find device "nvmf_tgt_br2" 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:24.971 Cannot find device "nvmf_br" 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:24.971 Cannot find 
device "nvmf_init_if" 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:24.971 Cannot find device "nvmf_init_if2" 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:24.971 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:24.971 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:24.971 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:25.231 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:25.231 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:25.231 00:31:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:25.231 00:31:11 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:25.231 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:25.231 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:15:25.231 00:15:25.231 --- 10.0.0.3 ping statistics --- 00:15:25.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.231 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:25.231 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:25.231 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:15:25.231 00:15:25.231 --- 10.0.0.4 ping statistics --- 00:15:25.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.231 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:25.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:25.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:15:25.231 00:15:25.231 --- 10.0.0.1 ping statistics --- 00:15:25.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.231 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:25.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:25.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:15:25.231 00:15:25.231 --- 10.0.0.2 ping statistics --- 00:15:25.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.231 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@457 -- # return 0 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # nvmfpid=84989 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # waitforlisten 84989 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 84989 ']' 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:25.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
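At this point nvmfappstart has launched nvmf_tgt (pid 84989) inside the nvmf_tgt_ns_spdk namespace and waitforlisten blocks until the application answers on /var/tmp/spdk.sock. The helper's own implementation is not shown in this trace; a minimal sketch of the same wait pattern, assuming the stock scripts/rpc.py client and the default socket path, looks like:

    # Sketch only: poll the SPDK RPC socket until the freshly started target responds.
    # Assumes scripts/rpc.py from the SPDK repo and the default /var/tmp/spdk.sock path.
    for _ in $(seq 1 100); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done

Once that wait returns, the rpc_cmd calls that follow can safely configure the transport, subsystem, namespace and listener.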
00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:25.231 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:25.490 [2024-12-17 00:31:11.250907] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:25.490 [2024-12-17 00:31:11.251023] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:25.490 [2024-12-17 00:31:11.393070] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.490 [2024-12-17 00:31:11.437804] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:25.490 [2024-12-17 00:31:11.437877] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:25.490 [2024-12-17 00:31:11.437891] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:25.490 [2024-12-17 00:31:11.437901] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:25.490 [2024-12-17 00:31:11.437910] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:25.490 [2024-12-17 00:31:11.437941] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.490 [2024-12-17 00:31:11.473532] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:25.750 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:25.750 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:15:25.750 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:25.750 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:25.750 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:25.750 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:25.750 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:25.750 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:25.750 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:15:25.750 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.750 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:25.750 [2024-12-17 00:31:11.577387] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:25.750 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.750 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:15:25.750 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.750 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:25.750 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.750 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:25.750 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.750 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:25.750 Malloc0 00:15:25.750 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.750 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:25.750 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.750 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:25.750 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.750 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:25.750 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.750 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:25.750 [2024-12-17 00:31:11.627988] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:25.750 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.750 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=85013 00:15:25.750 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:25.750 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=85014 00:15:25.750 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:25.750 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=85015 00:15:25.750 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 85013 00:15:25.750 00:31:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:26.009 [2024-12-17 00:31:11.802191] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:26.009 [2024-12-17 00:31:11.812674] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:26.009 [2024-12-17 00:31:11.813064] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:26.947 Initializing NVMe Controllers 00:15:26.947 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:26.947 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:15:26.947 Initialization complete. Launching workers. 00:15:26.947 ======================================================== 00:15:26.947 Latency(us) 00:15:26.947 Device Information : IOPS MiB/s Average min max 00:15:26.947 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3378.00 13.20 295.69 122.97 3776.20 00:15:26.947 ======================================================== 00:15:26.947 Total : 3378.00 13.20 295.69 122.97 3776.20 00:15:26.947 00:15:26.947 Initializing NVMe Controllers 00:15:26.947 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:26.947 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:15:26.947 Initialization complete. Launching workers. 00:15:26.947 ======================================================== 00:15:26.947 Latency(us) 00:15:26.947 Device Information : IOPS MiB/s Average min max 00:15:26.947 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3391.95 13.25 294.50 154.64 450.62 00:15:26.947 ======================================================== 00:15:26.947 Total : 3391.95 13.25 294.50 154.64 450.62 00:15:26.947 00:15:26.947 Initializing NVMe Controllers 00:15:26.947 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:26.947 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:15:26.947 Initialization complete. Launching workers. 
00:15:26.947 ======================================================== 00:15:26.947 Latency(us) 00:15:26.947 Device Information : IOPS MiB/s Average min max 00:15:26.947 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3376.00 13.19 295.84 166.30 840.22 00:15:26.947 ======================================================== 00:15:26.947 Total : 3376.00 13.19 295.84 166.30 840.22 00:15:26.947 00:15:26.947 00:31:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 85014 00:15:26.947 00:31:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 85015 00:15:26.947 00:31:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:26.947 00:31:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:15:26.947 00:31:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:26.947 00:31:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:15:26.947 00:31:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:26.947 00:31:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:15:26.947 00:31:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:26.947 00:31:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:26.947 rmmod nvme_tcp 00:15:26.947 rmmod nvme_fabrics 00:15:26.947 rmmod nvme_keyring 00:15:26.947 00:31:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:26.947 00:31:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:15:26.947 00:31:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:15:26.947 00:31:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@513 -- # '[' -n 84989 ']' 00:15:26.947 00:31:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # killprocess 84989 00:15:26.947 00:31:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 84989 ']' 00:15:26.947 00:31:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 84989 00:15:27.206 00:31:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:15:27.206 00:31:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:27.206 00:31:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84989 00:15:27.206 00:31:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:27.206 00:31:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:27.206 killing process with pid 84989 00:15:27.206 00:31:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84989' 00:15:27.206 00:31:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 84989 00:15:27.206 00:31:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@974 -- # wait 84989 00:15:27.206 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:15:27.206 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:27.206 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:27.206 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:15:27.206 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-save 00:15:27.206 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:27.206 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-restore 00:15:27.206 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:27.206 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:27.206 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:27.206 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:27.206 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:27.206 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:27.465 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:27.466 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:27.466 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:27.466 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:27.466 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:27.466 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:27.466 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:27.466 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:27.466 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:27.466 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:27.466 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.466 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:27.466 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.466 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:15:27.466 00:15:27.466 real 0m2.817s 00:15:27.466 user 0m4.685s 00:15:27.466 
sys 0m1.327s 00:15:27.466 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:27.466 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:27.466 ************************************ 00:15:27.466 END TEST nvmf_control_msg_list 00:15:27.466 ************************************ 00:15:27.466 00:31:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:15:27.466 00:31:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:27.466 00:31:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:27.466 00:31:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:27.466 ************************************ 00:15:27.466 START TEST nvmf_wait_for_buf 00:15:27.466 ************************************ 00:15:27.466 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:15:27.726 * Looking for test storage... 00:15:27.726 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:27.726 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:27.726 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:15:27.726 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:27.726 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:27.726 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:27.726 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:27.726 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:27.726 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:15:27.726 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:15:27.726 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:15:27.726 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:15:27.726 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:15:27.726 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:15:27.726 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:15:27.726 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:27.726 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:15:27.726 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:15:27.726 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:27.726 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:27.726 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:15:27.726 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:15:27.726 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:27.726 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:15:27.726 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:27.726 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:15:27.726 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:15:27.726 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:27.726 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:15:27.726 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:27.726 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:27.726 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:27.726 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:15:27.726 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:27.726 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:27.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.726 --rc genhtml_branch_coverage=1 00:15:27.726 --rc genhtml_function_coverage=1 00:15:27.726 --rc genhtml_legend=1 00:15:27.726 --rc geninfo_all_blocks=1 00:15:27.726 --rc geninfo_unexecuted_blocks=1 00:15:27.726 00:15:27.726 ' 00:15:27.726 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:27.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.726 --rc genhtml_branch_coverage=1 00:15:27.726 --rc genhtml_function_coverage=1 00:15:27.726 --rc genhtml_legend=1 00:15:27.726 --rc geninfo_all_blocks=1 00:15:27.726 --rc geninfo_unexecuted_blocks=1 00:15:27.726 00:15:27.726 ' 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:27.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.727 --rc genhtml_branch_coverage=1 00:15:27.727 --rc genhtml_function_coverage=1 00:15:27.727 --rc genhtml_legend=1 00:15:27.727 --rc geninfo_all_blocks=1 00:15:27.727 --rc geninfo_unexecuted_blocks=1 00:15:27.727 00:15:27.727 ' 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:27.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.727 --rc genhtml_branch_coverage=1 00:15:27.727 --rc genhtml_function_coverage=1 00:15:27.727 --rc genhtml_legend=1 00:15:27.727 --rc geninfo_all_blocks=1 00:15:27.727 --rc geninfo_unexecuted_blocks=1 00:15:27.727 00:15:27.727 ' 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:27.727 00:31:13 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:27.727 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 
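The "line 33: [: : integer expression expected" complaint, seen here and earlier in the control_msg_list run, comes from common.sh comparing an apparently empty value against 1 with -eq ('[' '' -eq 1 ']'); the name of the unset variable is not visible in this trace. The usual guard for that pattern, sketched with a placeholder variable name, is:

    # Sketch only: default an unset flag before an integer test.
    # SOME_FLAG is a placeholder; the real variable at common.sh line 33 is not shown in this log.
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        :   # flag-specific setup would go here
    fi

With the :-0 expansion the test evaluates cleanly whether or not the flag is exported, and the harness continues into nvmftestinit exactly as it does here.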
00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@456 -- # nvmf_veth_init 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:27.727 Cannot find device "nvmf_init_br" 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:27.727 Cannot find device "nvmf_init_br2" 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:27.727 Cannot find device "nvmf_tgt_br" 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:27.727 Cannot find device "nvmf_tgt_br2" 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:27.727 Cannot find device "nvmf_init_br" 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:15:27.727 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:27.987 Cannot find device "nvmf_init_br2" 00:15:27.987 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:15:27.987 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:27.987 Cannot find device "nvmf_tgt_br" 00:15:27.987 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:15:27.987 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:27.987 Cannot find device "nvmf_tgt_br2" 00:15:27.987 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:15:27.987 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:27.987 Cannot find device "nvmf_br" 00:15:27.987 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:15:27.987 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:27.987 Cannot find device "nvmf_init_if" 00:15:27.987 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:15:27.987 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:27.987 Cannot find device "nvmf_init_if2" 00:15:27.987 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:15:27.987 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:27.987 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:27.987 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:15:27.987 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:27.987 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:27.987 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:15:27.987 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:27.987 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:27.987 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:27.987 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:27.987 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:27.987 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:27.987 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:27.987 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:27.987 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:27.987 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:27.987 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:27.987 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:27.987 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:27.987 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:27.987 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:27.987 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:27.987 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:27.987 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:27.987 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:27.987 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:27.987 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:27.987 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:27.987 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:27.987 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:28.247 00:31:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:28.247 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:28.247 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:28.247 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:28.247 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:28.247 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:28.247 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:28.247 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:28.247 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:28.247 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:28.247 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.105 ms 00:15:28.247 00:15:28.247 --- 10.0.0.3 ping statistics --- 00:15:28.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.247 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:15:28.247 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:28.247 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:28.247 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.037 ms 00:15:28.247 00:15:28.247 --- 10.0.0.4 ping statistics --- 00:15:28.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.247 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:15:28.247 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:28.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:28.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:28.247 00:15:28.247 --- 10.0.0.1 ping statistics --- 00:15:28.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.247 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:28.247 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:28.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:28.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:15:28.247 00:15:28.247 --- 10.0.0.2 ping statistics --- 00:15:28.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.247 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:15:28.247 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:28.247 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@457 -- # return 0 00:15:28.247 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:28.247 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:28.247 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:28.247 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:28.247 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:28.247 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:28.247 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:28.247 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:15:28.247 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:28.247 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:28.247 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:28.247 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # nvmfpid=85250 00:15:28.247 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:15:28.247 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # waitforlisten 85250 00:15:28.247 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 85250 ']' 00:15:28.247 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.247 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:28.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.247 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.247 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:28.247 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:28.247 [2024-12-17 00:31:14.146274] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
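Before the target application is launched above, nvmf_veth_init builds a four-veth test topology: the initiator ends stay in the root namespace, the target ends are moved into nvmf_tgt_ns_spdk, and all host-side peers are enslaved to one bridge. A condensed sketch of that setup, using the same commands and addresses that appear in the trace (run as root; this is a summary of the logged steps, not the helper itself):

  # Condensed sketch of the topology built by nvmf_veth_init (commands as logged).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk    # target-side ends live in the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if            # initiator addresses
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for peer in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$peer" up
      ip link set "$peer" master nvmf_br              # all host-side peers join the bridge
  done
  # Rules carry an SPDK_NVMF comment so the later cleanup can strip them with
  # iptables-save | grep -v SPDK_NVMF | iptables-restore (the iptr step near the end of the test).
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

The four pings that follow in the trace (10.0.0.3/10.0.0.4 from the root namespace, 10.0.0.1/10.0.0.2 from inside the namespace) simply confirm this topology before nvmf_tgt is started inside nvmf_tgt_ns_spdk.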
00:15:28.247 [2024-12-17 00:31:14.146411] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.507 [2024-12-17 00:31:14.285782] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.507 [2024-12-17 00:31:14.327873] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:28.507 [2024-12-17 00:31:14.327959] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:28.507 [2024-12-17 00:31:14.327973] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:28.507 [2024-12-17 00:31:14.327983] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:28.507 [2024-12-17 00:31:14.327992] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:28.507 [2024-12-17 00:31:14.328028] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.507 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:28.507 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:15:28.507 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:28.507 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:28.507 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:28.507 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:28.507 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:28.507 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:28.507 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:15:28.507 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.507 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:28.507 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.507 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:15:28.507 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.507 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:28.507 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.507 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:15:28.507 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.507 00:31:14 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:28.507 [2024-12-17 00:31:14.504089] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:28.766 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.766 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:28.766 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.766 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:28.766 Malloc0 00:15:28.766 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.766 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:15:28.766 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.766 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:28.767 [2024-12-17 00:31:14.540938] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:28.767 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.767 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:15:28.767 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.767 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:28.767 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.767 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:28.767 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.767 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:28.767 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.767 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:28.767 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.767 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:28.767 [2024-12-17 00:31:14.573496] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:28.767 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.767 00:31:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:28.767 [2024-12-17 00:31:14.761659] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:30.174 Initializing NVMe Controllers 00:15:30.174 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:30.174 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:15:30.174 Initialization complete. Launching workers. 00:15:30.174 ======================================================== 00:15:30.174 Latency(us) 00:15:30.174 Device Information : IOPS MiB/s Average min max 00:15:30.174 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 499.97 62.50 8000.69 5014.22 11028.85 00:15:30.174 ======================================================== 00:15:30.174 Total : 499.97 62.50 8000.69 5014.22 11028.85 00:15:30.174 00:15:30.174 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:15:30.174 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:15:30.174 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.174 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:30.174 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.174 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4750 00:15:30.174 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4750 -eq 0 ]] 00:15:30.174 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:30.174 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:15:30.174 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:30.174 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:15:30.174 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:30.174 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:15:30.174 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:30.174 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:30.174 rmmod nvme_tcp 00:15:30.174 rmmod nvme_fabrics 00:15:30.433 rmmod nvme_keyring 00:15:30.433 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:30.433 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:15:30.433 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:15:30.433 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@513 -- # '[' -n 85250 ']' 00:15:30.433 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # killprocess 85250 00:15:30.433 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 85250 ']' 00:15:30.433 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- 
# kill -0 85250 00:15:30.433 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname 00:15:30.433 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:30.433 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85250 00:15:30.433 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:30.433 killing process with pid 85250 00:15:30.433 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:30.433 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85250' 00:15:30.433 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 85250 00:15:30.433 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 85250 00:15:30.433 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:15:30.433 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:30.433 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:30.433 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:15:30.433 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-save 00:15:30.433 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:30.433 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-restore 00:15:30.433 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:30.433 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:30.433 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:30.433 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:30.692 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:30.692 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:30.692 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:30.692 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:30.692 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:30.692 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:30.692 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:30.692 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:30.692 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:30.692 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:30.692 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:30.692 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:30.692 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.692 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:30.692 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.692 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:15:30.692 00:15:30.692 real 0m3.220s 00:15:30.692 user 0m2.572s 00:15:30.692 sys 0m0.758s 00:15:30.692 ************************************ 00:15:30.692 END TEST nvmf_wait_for_buf 00:15:30.692 ************************************ 00:15:30.692 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:30.692 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:30.952 ************************************ 00:15:30.952 START TEST nvmf_fuzz 00:15:30.952 ************************************ 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:15:30.952 * Looking for test storage... 
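For reference, the nvmf_wait_for_buf run that just completed (END TEST banner above) shrinks the iobuf small-buffer pool and then drives large reads so the TCP target has to retry buffer allocations instead of failing. A condensed sketch of that flow, reconstructed from the RPC calls and flags visible in the trace; the rpc.py invocation form is an assumption (the test goes through its rpc_cmd wrapper), and the final check is inferred from the [[ 4750 -eq 0 ]] line:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
  $rpc accel_set_options --small-cache-size 0 --large-cache-size 0
  $rpc iobuf_set_options --small-pool-count 154 --small_bufsize=8192   # deliberately tiny small-buffer pool
  $rpc framework_start_init
  $rpc bdev_malloc_create -b Malloc0 32 512
  $rpc nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
  $rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  # 128 KiB random reads force the target to wait for iobuf space rather than error out ...
  spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
  # ... and the run only counts as a pass if the nvmf_TCP small pool actually had to retry
  # (4750 retries in this run).
  retry_count=$($rpc iobuf_get_stats \
      | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
  if [[ $retry_count -eq 0 ]]; then
      echo "no iobuf retries observed" >&2
      exit 1
  fi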
00:15:30.952 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:30.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.952 --rc genhtml_branch_coverage=1 00:15:30.952 --rc genhtml_function_coverage=1 00:15:30.952 --rc genhtml_legend=1 00:15:30.952 --rc geninfo_all_blocks=1 00:15:30.952 --rc geninfo_unexecuted_blocks=1 00:15:30.952 00:15:30.952 ' 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:30.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.952 --rc genhtml_branch_coverage=1 00:15:30.952 --rc genhtml_function_coverage=1 00:15:30.952 --rc genhtml_legend=1 00:15:30.952 --rc geninfo_all_blocks=1 00:15:30.952 --rc geninfo_unexecuted_blocks=1 00:15:30.952 00:15:30.952 ' 00:15:30.952 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:30.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.952 --rc genhtml_branch_coverage=1 00:15:30.952 --rc genhtml_function_coverage=1 00:15:30.952 --rc genhtml_legend=1 00:15:30.952 --rc geninfo_all_blocks=1 00:15:30.953 --rc geninfo_unexecuted_blocks=1 00:15:30.953 00:15:30.953 ' 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:30.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.953 --rc genhtml_branch_coverage=1 00:15:30.953 --rc genhtml_function_coverage=1 00:15:30.953 --rc genhtml_legend=1 00:15:30.953 --rc geninfo_all_blocks=1 00:15:30.953 --rc geninfo_unexecuted_blocks=1 00:15:30.953 00:15:30.953 ' 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
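The "lt 1.15 2" gate traced just above decides whether the installed lcov understands the newer branch/function coverage flags; it is an element-wise version comparison from scripts/common.sh. A minimal sketch of the same idea (helper names kept, body condensed rather than copied):

  cmp_versions() {
      local ver1 ver2 ver1_l ver2_l op=$2 v d1 d2
      IFS=.- read -ra ver1 <<< "$1"; ver1_l=${#ver1[@]}
      IFS=.- read -ra ver2 <<< "$3"; ver2_l=${#ver2[@]}
      for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
          d1=${ver1[v]:-0} d2=${ver2[v]:-0}            # missing components compare as 0
          ((d1 > d2)) && { [[ $op == '>'* ]]; return; }
          ((d1 < d2)) && { [[ $op == '<'* ]]; return; }
      done
      [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all components equal
  }
  lt() { cmp_versions "$1" '<' "$2"; }
  lt 1.15 2 && echo "lcov 1.15 is older than 2"        # the branch taken in this run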
00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:30.953 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
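The "[: : integer expression expected" message above is benign: nvmf/common.sh line 33 runs a numeric test ('[' '' -eq 1 ']') against a flag that is empty in this environment, so the test prints the error and evaluates false. A minimal reproduction and the usual guard; the variable name here is hypothetical, only the failing pattern is taken from the trace:

  SOME_FLAG=''                           # hypothetical stand-in for the flag left empty in this run
  if [ "$SOME_FLAG" -eq 1 ]; then        # -> "[: : integer expression expected" on stderr
      echo "flag set"
  fi
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then   # defaulting to 0 keeps the operand numeric and quiet
      echo "flag set"
  fi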
00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@456 -- # nvmf_veth_init 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:30.953 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:31.212 Cannot find device "nvmf_init_br" 00:15:31.212 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:15:31.212 00:31:16 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:31.212 Cannot find device "nvmf_init_br2" 00:15:31.212 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:15:31.212 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:31.212 Cannot find device "nvmf_tgt_br" 00:15:31.212 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # true 00:15:31.212 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:31.212 Cannot find device "nvmf_tgt_br2" 00:15:31.212 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # true 00:15:31.212 00:31:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:31.212 Cannot find device "nvmf_init_br" 00:15:31.212 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # true 00:15:31.212 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:31.212 Cannot find device "nvmf_init_br2" 00:15:31.212 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # true 00:15:31.212 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:31.212 Cannot find device "nvmf_tgt_br" 00:15:31.212 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # true 00:15:31.212 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:31.212 Cannot find device "nvmf_tgt_br2" 00:15:31.212 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # true 00:15:31.212 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:31.212 Cannot find device "nvmf_br" 00:15:31.212 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # true 00:15:31.212 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:31.212 Cannot find device "nvmf_init_if" 00:15:31.212 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # true 00:15:31.212 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:31.212 Cannot find device "nvmf_init_if2" 00:15:31.212 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # true 00:15:31.212 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:31.212 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:31.212 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # true 00:15:31.212 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:31.212 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:31.212 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # true 00:15:31.212 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:31.212 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:31.212 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:15:31.212 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:31.212 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:31.212 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:31.212 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:31.470 00:31:17 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:31.470 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:31.470 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:15:31.470 00:15:31.470 --- 10.0.0.3 ping statistics --- 00:15:31.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.470 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:31.470 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:31.470 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:15:31.470 00:15:31.470 --- 10.0.0.4 ping statistics --- 00:15:31.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.470 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:31.470 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:31.470 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:15:31.470 00:15:31.470 --- 10.0.0.1 ping statistics --- 00:15:31.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.470 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:31.470 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:31.470 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:15:31.470 00:15:31.470 --- 10.0.0.2 ping statistics --- 00:15:31.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.470 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@457 -- # return 0 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=85507 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 85507 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 85507 ']' 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
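waitforlisten above blocks until the freshly started nvmf_tgt (pid 85507, launched inside nvmf_tgt_ns_spdk with -m 0x1) is answering RPCs on /var/tmp/spdk.sock; the trace shows its defaults (rpc_addr=/var/tmp/spdk.sock, max_retries=100). A rough stand-in for that wait, as a sketch rather than the actual helper from autotest_common.sh; the rpc.py path and the use of rpc_get_methods as the liveness probe are assumptions:

  pid=$1                                   # 85507 in this run
  sock=${2:-/var/tmp/spdk.sock}
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
  for ((i = 0; i < 100; i++)); do
      kill -0 "$pid" 2> /dev/null || { echo "target $pid exited early" >&2; exit 1; }
      if [[ -S $sock ]] && "$rpc" -s "$sock" rpc_get_methods &> /dev/null; then
          exit 0                           # target is up and serving RPCs
      fi
      sleep 0.1
  done
  echo "timed out waiting for $sock" >&2
  exit 1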
00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:31.470 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:32.039 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:32.039 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:15:32.039 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:32.039 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.039 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:32.039 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.039 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:15:32.039 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.039 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:32.039 Malloc0 00:15:32.039 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.039 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:32.039 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.039 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:32.039 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.039 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:32.039 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.039 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:32.039 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.039 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:32.039 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.039 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:32.039 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.039 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' 00:15:32.039 00:31:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -N -a 00:15:32.298 Shutting down the fuzz application 00:15:32.298 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 
'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:15:32.558 Shutting down the fuzz application 00:15:32.558 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:32.558 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.558 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:32.558 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.558 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:15:32.558 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:15:32.558 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:32.558 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:15:32.558 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:32.558 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:15:32.558 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:32.558 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:32.558 rmmod nvme_tcp 00:15:32.558 rmmod nvme_fabrics 00:15:32.558 rmmod nvme_keyring 00:15:32.558 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:32.558 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:15:32.558 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:15:32.558 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@513 -- # '[' -n 85507 ']' 00:15:32.558 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@514 -- # killprocess 85507 00:15:32.558 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 85507 ']' 00:15:32.558 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 85507 00:15:32.558 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:15:32.558 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:32.558 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85507 00:15:32.558 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:32.558 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:32.558 killing process with pid 85507 00:15:32.558 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85507' 00:15:32.558 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 85507 00:15:32.558 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 85507 00:15:32.818 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:15:32.818 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:32.818 00:31:18 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:32.818 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:15:32.818 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-restore 00:15:32.818 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-save 00:15:32.818 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:32.818 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:32.818 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:32.818 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:32.818 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:32.818 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:32.818 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:32.818 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:32.818 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:32.818 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:32.818 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:32.818 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:33.077 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:33.077 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:33.077 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:33.077 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:33.077 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:33.077 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:33.077 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:33.077 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.077 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@300 -- # return 0 00:15:33.077 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:15:33.077 00:15:33.077 real 0m2.254s 00:15:33.077 user 0m1.882s 00:15:33.077 sys 0m0.653s 00:15:33.077 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:33.077 00:31:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:33.077 ************************************ 00:15:33.077 END TEST nvmf_fuzz 00:15:33.078 ************************************ 00:15:33.078 00:31:19 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:15:33.078 00:31:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:33.078 00:31:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:33.078 00:31:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:33.078 ************************************ 00:15:33.078 START TEST nvmf_multiconnection 00:15:33.078 ************************************ 00:15:33.078 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:15:33.339 * Looking for test storage... 00:15:33.339 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lcov --version 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:33.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.339 --rc genhtml_branch_coverage=1 00:15:33.339 --rc genhtml_function_coverage=1 00:15:33.339 --rc genhtml_legend=1 00:15:33.339 --rc geninfo_all_blocks=1 00:15:33.339 --rc geninfo_unexecuted_blocks=1 00:15:33.339 00:15:33.339 ' 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:33.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.339 --rc genhtml_branch_coverage=1 00:15:33.339 --rc genhtml_function_coverage=1 00:15:33.339 --rc genhtml_legend=1 00:15:33.339 --rc geninfo_all_blocks=1 00:15:33.339 --rc geninfo_unexecuted_blocks=1 00:15:33.339 00:15:33.339 ' 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:33.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.339 --rc genhtml_branch_coverage=1 00:15:33.339 --rc genhtml_function_coverage=1 00:15:33.339 --rc genhtml_legend=1 00:15:33.339 --rc geninfo_all_blocks=1 00:15:33.339 --rc geninfo_unexecuted_blocks=1 00:15:33.339 00:15:33.339 ' 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:33.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.339 --rc genhtml_branch_coverage=1 00:15:33.339 --rc genhtml_function_coverage=1 00:15:33.339 --rc genhtml_legend=1 00:15:33.339 --rc geninfo_all_blocks=1 00:15:33.339 --rc geninfo_unexecuted_blocks=1 00:15:33.339 00:15:33.339 ' 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:33.339 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.339 
00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:33.340 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@456 -- # nvmf_veth_init 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:33.340 00:31:19 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:33.340 Cannot find device "nvmf_init_br" 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:33.340 Cannot find device "nvmf_init_br2" 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:33.340 Cannot find device "nvmf_tgt_br" 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # true 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:33.340 Cannot find device "nvmf_tgt_br2" 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # true 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:33.340 Cannot find device "nvmf_init_br" 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # true 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:33.340 Cannot find device "nvmf_init_br2" 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # true 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:33.340 Cannot find device "nvmf_tgt_br" 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # true 00:15:33.340 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:33.599 Cannot find device "nvmf_tgt_br2" 00:15:33.599 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # true 00:15:33.599 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:33.599 Cannot find device "nvmf_br" 00:15:33.599 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # true 00:15:33.599 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:33.599 Cannot find device "nvmf_init_if" 00:15:33.599 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # true 00:15:33.599 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # ip link delete 
nvmf_init_if2 00:15:33.599 Cannot find device "nvmf_init_if2" 00:15:33.599 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # true 00:15:33.599 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:33.599 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:33.599 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # true 00:15:33.599 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:33.599 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:33.599 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # true 00:15:33.599 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:33.599 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:33.599 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:33.599 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:33.599 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:33.599 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:33.599 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:33.599 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:33.599 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:33.599 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:33.599 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:33.599 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:33.599 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:33.599 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:33.600 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:33.600 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:33.600 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:33.600 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:33.600 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set 
nvmf_tgt_if2 up 00:15:33.600 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:33.600 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:33.600 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:33.600 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:33.600 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:33.600 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:33.859 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:33.859 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:33.859 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:33.859 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:33.859 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:33.859 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:33.859 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:33.859 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:33.859 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:33.859 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:15:33.859 00:15:33.859 --- 10.0.0.3 ping statistics --- 00:15:33.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.859 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:15:33.859 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:33.859 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:33.859 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:15:33.859 00:15:33.859 --- 10.0.0.4 ping statistics --- 00:15:33.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.859 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:15:33.859 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:33.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:33.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:15:33.859 00:15:33.859 --- 10.0.0.1 ping statistics --- 00:15:33.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.859 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:15:33.859 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:33.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:33.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:15:33.859 00:15:33.859 --- 10.0.0.2 ping statistics --- 00:15:33.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.859 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:15:33.859 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:33.859 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@457 -- # return 0 00:15:33.859 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:33.859 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:33.859 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:33.859 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:33.859 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:33.859 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:33.859 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:33.859 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:15:33.859 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:33.859 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:33.859 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:33.859 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@505 -- # nvmfpid=85739 00:15:33.859 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:33.859 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@506 -- # waitforlisten 85739 00:15:33.859 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 85739 ']' 00:15:33.859 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.859 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:33.859 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
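nvmftestinit above rebuilds the virtual test topology: a dedicated network namespace, veth pairs bridged on the host side, 10.0.0.0/24 addressing, iptables ACCEPT rules for port 4420, and ping checks in both directions. A condensed sketch restating the ip/iptables commands that appear in the trace for the first veth pair (interface names and addresses are taken directly from the log; the second pair, nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4, follows the same pattern):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  # bridge the host-side ends so initiator and target can reach each other
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br master nvmf_br && ip link set nvmf_tgt_br up
  # admit NVMe/TCP traffic and intra-bridge forwarding
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3   # host initiator -> target address inside the namespace

With the topology verified, the log proceeds to start nvmf_tgt with reactor mask 0xF and create the eleven Malloc-backed subsystems listening on 10.0.0.3:4420.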
00:15:33.859 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:33.859 00:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:33.859 [2024-12-17 00:31:19.745485] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:33.859 [2024-12-17 00:31:19.746149] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:34.118 [2024-12-17 00:31:19.883087] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:34.118 [2024-12-17 00:31:19.923528] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:34.118 [2024-12-17 00:31:19.923828] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:34.118 [2024-12-17 00:31:19.924003] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:34.118 [2024-12-17 00:31:19.924153] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:34.119 [2024-12-17 00:31:19.924214] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:34.119 [2024-12-17 00:31:19.924450] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.119 [2024-12-17 00:31:19.924808] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:34.119 [2024-12-17 00:31:19.924989] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.119 [2024-12-17 00:31:19.925139] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:15:34.119 [2024-12-17 00:31:19.956196] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:34.119 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:34.119 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:15:34.119 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:34.119 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:34.119 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.119 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:34.119 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:34.119 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.119 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.119 [2024-12-17 00:31:20.058245] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:34.119 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.119 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:15:34.119 00:31:20 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:34.119 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:34.119 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.119 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.119 Malloc1 00:15:34.119 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.119 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:15:34.119 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.119 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.119 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.119 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:34.119 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.119 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.119 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.119 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:34.119 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.119 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.119 [2024-12-17 00:31:20.114215] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:34.119 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.119 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:34.119 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:15:34.119 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.119 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.378 Malloc2 00:15:34.378 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.378 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:34.378 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.378 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.378 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.378 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:15:34.378 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.378 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.378 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.378 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:15:34.378 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.378 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.378 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.378 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:34.378 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.379 Malloc3 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.379 Malloc4 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.379 Malloc5 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:15:34.379 
00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.3 -s 4420 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.379 Malloc6 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.3 -s 4420 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.379 Malloc7 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.3 -s 4420 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.379 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.639 Malloc8 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.639 
00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.3 -s 4420 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.639 Malloc9 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.3 -s 4420 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.639 Malloc10 00:15:34.639 00:31:20 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.3 -s 4420 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.639 Malloc11 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.3 -s 4420 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:34.639 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid=93817295-c2e4-400f-aefe-caa93fc06858 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:15:34.899 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:15:34.899 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:15:34.899 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:34.899 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:34.899 00:31:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:15:36.804 00:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:36.804 00:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:15:36.804 00:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:36.804 00:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:36.804 00:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:36.804 00:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:15:36.804 00:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:36.804 00:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid=93817295-c2e4-400f-aefe-caa93fc06858 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.3 -s 4420 00:15:37.063 00:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:15:37.063 00:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:15:37.063 00:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:37.063 00:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:37.063 00:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:15:38.967 00:31:24 
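The xtrace above repeats the same four-RPC setup eleven times, once per subsystem. As a readability aid, here is a hedged bash sketch of that target-side loop as it can be reconstructed from the trace (target/multiconnection.sh lines 21-25). rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py; the RPC names and arguments are copied verbatim from the trace, while NVMF_TARGET_IP is an illustrative variable name and the size/block-size reading of bdev_malloc_create's arguments is an assumption rather than something the log states.

NVMF_SUBSYS=11            # the trace iterates "seq 1 11"
NVMF_TARGET_IP=10.0.0.3   # assumed name; the literal listener address in the trace

for i in $(seq 1 $NVMF_SUBSYS); do
    rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"                            # backing bdev: size 64, block size 512
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i" # -a: allow any host, -s: serial number
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"     # attach the bdev as a namespace
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a "$NVMF_TARGET_IP" -s 4420                                    # TCP listener on port 4420
done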
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:38.967 00:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:38.967 00:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:15:38.967 00:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:38.967 00:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:38.967 00:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:15:38.967 00:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:38.967 00:31:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid=93817295-c2e4-400f-aefe-caa93fc06858 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.3 -s 4420 00:15:39.225 00:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:15:39.225 00:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:15:39.225 00:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:39.225 00:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:39.226 00:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:15:41.125 00:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:41.126 00:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:15:41.126 00:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:41.126 00:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:41.126 00:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:41.126 00:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:15:41.126 00:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:41.126 00:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid=93817295-c2e4-400f-aefe-caa93fc06858 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.3 -s 4420 00:15:41.385 00:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:15:41.385 00:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:15:41.385 00:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:41.385 00:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:15:41.385 00:31:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:15:43.287 00:31:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:43.287 00:31:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:43.287 00:31:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:15:43.287 00:31:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:43.287 00:31:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:43.287 00:31:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:15:43.287 00:31:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:43.287 00:31:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid=93817295-c2e4-400f-aefe-caa93fc06858 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.3 -s 4420 00:15:43.546 00:31:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:15:43.546 00:31:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:15:43.546 00:31:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:43.546 00:31:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:43.546 00:31:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:15:45.496 00:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:45.496 00:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:15:45.496 00:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:45.496 00:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:45.496 00:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:45.496 00:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:15:45.496 00:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:45.496 00:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid=93817295-c2e4-400f-aefe-caa93fc06858 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.3 -s 4420 00:15:45.754 00:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:15:45.754 00:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:15:45.754 00:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local 
nvme_device_counter=1 nvme_devices=0 00:15:45.754 00:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:45.754 00:31:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:15:47.656 00:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:47.656 00:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:47.656 00:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:15:47.656 00:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:47.656 00:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:47.656 00:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:15:47.656 00:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:47.656 00:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid=93817295-c2e4-400f-aefe-caa93fc06858 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.3 -s 4420 00:15:47.915 00:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:15:47.915 00:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:15:47.915 00:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:47.915 00:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:47.915 00:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:15:49.815 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:49.815 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:49.815 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:15:49.815 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:49.815 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:49.815 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:15:49.815 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:49.815 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid=93817295-c2e4-400f-aefe-caa93fc06858 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.3 -s 4420 00:15:50.073 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:15:50.073 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1198 -- # local i=0 00:15:50.073 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:50.073 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:50.073 00:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:15:51.973 00:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:51.973 00:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:51.973 00:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:15:51.973 00:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:51.973 00:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:51.973 00:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:15:51.973 00:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:51.973 00:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid=93817295-c2e4-400f-aefe-caa93fc06858 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.3 -s 4420 00:15:52.231 00:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:15:52.231 00:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:15:52.231 00:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:52.232 00:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:52.232 00:31:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:15:54.133 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:54.133 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:15:54.133 00:31:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:54.133 00:31:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:54.133 00:31:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:54.133 00:31:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:15:54.133 00:31:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:54.133 00:31:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid=93817295-c2e4-400f-aefe-caa93fc06858 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.3 -s 4420 00:15:54.391 00:31:40 
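The connect phase repeats one host-side pattern per subsystem: nvme connect over TCP, then waitforserial polls lsblk until a block device whose SERIAL column matches SPDK$i shows up. A hedged bash sketch of that loop, reconstructed from the trace (multiconnection.sh lines 28-30 plus the waitforserial output from autotest_common.sh); the tries and HOSTID variable names are illustrative, while the 15-attempt / 2-second polling values and the lsblk|grep check are taken from the trace itself.

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858
HOSTID=93817295-c2e4-400f-aefe-caa93fc06858

for i in $(seq 1 $NVMF_SUBSYS); do
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.3 -s 4420
    # waitforserial SPDK$i: up to 15 attempts, 2 seconds apart, until exactly one
    # lsblk row carries the serial assigned by nvmf_create_subsystem above.
    tries=0
    while (( tries++ <= 15 )); do
        sleep 2
        (( $(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i") == 1 )) && break
    done
done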
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:15:54.391 00:31:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:15:54.391 00:31:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:54.391 00:31:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:54.391 00:31:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:15:56.292 00:31:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:56.292 00:31:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:56.292 00:31:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:15:56.292 00:31:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:56.292 00:31:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:56.292 00:31:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:15:56.292 00:31:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:56.293 00:31:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid=93817295-c2e4-400f-aefe-caa93fc06858 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.3 -s 4420 00:15:56.551 00:31:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:15:56.551 00:31:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:15:56.551 00:31:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:56.551 00:31:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:56.551 00:31:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:15:58.452 00:31:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:58.452 00:31:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:58.452 00:31:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:15:58.452 00:31:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:58.452 00:31:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:58.452 00:31:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:15:58.452 00:31:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:15:58.452 [global] 00:15:58.452 thread=1 00:15:58.452 invalidate=1 00:15:58.452 rw=read 00:15:58.452 time_based=1 
00:15:58.452 runtime=10 00:15:58.452 ioengine=libaio 00:15:58.452 direct=1 00:15:58.452 bs=262144 00:15:58.452 iodepth=64 00:15:58.452 norandommap=1 00:15:58.452 numjobs=1 00:15:58.452 00:15:58.452 [job0] 00:15:58.452 filename=/dev/nvme0n1 00:15:58.452 [job1] 00:15:58.452 filename=/dev/nvme10n1 00:15:58.452 [job2] 00:15:58.452 filename=/dev/nvme1n1 00:15:58.452 [job3] 00:15:58.452 filename=/dev/nvme2n1 00:15:58.452 [job4] 00:15:58.452 filename=/dev/nvme3n1 00:15:58.452 [job5] 00:15:58.452 filename=/dev/nvme4n1 00:15:58.452 [job6] 00:15:58.452 filename=/dev/nvme5n1 00:15:58.452 [job7] 00:15:58.452 filename=/dev/nvme6n1 00:15:58.452 [job8] 00:15:58.452 filename=/dev/nvme7n1 00:15:58.452 [job9] 00:15:58.452 filename=/dev/nvme8n1 00:15:58.452 [job10] 00:15:58.452 filename=/dev/nvme9n1 00:15:58.710 Could not set queue depth (nvme0n1) 00:15:58.710 Could not set queue depth (nvme10n1) 00:15:58.710 Could not set queue depth (nvme1n1) 00:15:58.710 Could not set queue depth (nvme2n1) 00:15:58.710 Could not set queue depth (nvme3n1) 00:15:58.710 Could not set queue depth (nvme4n1) 00:15:58.710 Could not set queue depth (nvme5n1) 00:15:58.710 Could not set queue depth (nvme6n1) 00:15:58.710 Could not set queue depth (nvme7n1) 00:15:58.710 Could not set queue depth (nvme8n1) 00:15:58.710 Could not set queue depth (nvme9n1) 00:15:58.710 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:58.710 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:58.710 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:58.710 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:58.710 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:58.710 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:58.710 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:58.710 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:58.710 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:58.710 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:58.710 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:58.710 fio-3.35 00:15:58.710 Starting 11 threads 00:16:10.921 00:16:10.921 job0: (groupid=0, jobs=1): err= 0: pid=86195: Tue Dec 17 00:31:55 2024 00:16:10.921 read: IOPS=198, BW=49.5MiB/s (51.9MB/s)(500MiB/10096msec) 00:16:10.921 slat (usec): min=18, max=87802, avg=5005.46, stdev=12406.51 00:16:10.921 clat (msec): min=18, max=455, avg=317.61, stdev=88.59 00:16:10.921 lat (msec): min=19, max=462, avg=322.62, stdev=89.79 00:16:10.921 clat percentiles (msec): 00:16:10.921 | 1.00th=[ 101], 5.00th=[ 126], 10.00th=[ 155], 20.00th=[ 213], 00:16:10.921 | 30.00th=[ 326], 40.00th=[ 342], 50.00th=[ 355], 60.00th=[ 363], 00:16:10.921 | 70.00th=[ 372], 80.00th=[ 380], 90.00th=[ 393], 95.00th=[ 405], 00:16:10.921 | 99.00th=[ 426], 99.50th=[ 443], 99.90th=[ 456], 99.95th=[ 456], 00:16:10.921 | 99.99th=[ 456] 00:16:10.921 bw ( KiB/s): min=42068, max=92672, 
per=6.38%, avg=49573.55, stdev=14897.64, samples=20 00:16:10.921 iops : min= 164, max= 362, avg=193.50, stdev=58.24, samples=20 00:16:10.921 lat (msec) : 20=0.05%, 100=1.00%, 250=20.20%, 500=78.75% 00:16:10.921 cpu : usr=0.20%, sys=0.80%, ctx=441, majf=0, minf=4097 00:16:10.921 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:16:10.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.921 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:10.921 issued rwts: total=2000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.921 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:10.921 job1: (groupid=0, jobs=1): err= 0: pid=86196: Tue Dec 17 00:31:55 2024 00:16:10.921 read: IOPS=419, BW=105MiB/s (110MB/s)(1053MiB/10041msec) 00:16:10.921 slat (usec): min=20, max=123983, avg=2297.23, stdev=6002.09 00:16:10.921 clat (msec): min=35, max=419, avg=150.08, stdev=44.16 00:16:10.921 lat (msec): min=44, max=419, avg=152.38, stdev=44.32 00:16:10.921 clat percentiles (msec): 00:16:10.921 | 1.00th=[ 85], 5.00th=[ 118], 10.00th=[ 126], 20.00th=[ 132], 00:16:10.921 | 30.00th=[ 138], 40.00th=[ 140], 50.00th=[ 144], 60.00th=[ 146], 00:16:10.921 | 70.00th=[ 150], 80.00th=[ 155], 90.00th=[ 163], 95.00th=[ 213], 00:16:10.921 | 99.00th=[ 363], 99.50th=[ 384], 99.90th=[ 414], 99.95th=[ 422], 00:16:10.921 | 99.99th=[ 422] 00:16:10.921 bw ( KiB/s): min=45056, max=122368, per=13.67%, avg=106228.80, stdev=20577.16, samples=20 00:16:10.921 iops : min= 176, max= 478, avg=414.95, stdev=80.38, samples=20 00:16:10.921 lat (msec) : 50=0.55%, 100=1.21%, 250=94.09%, 500=4.15% 00:16:10.921 cpu : usr=0.22%, sys=1.88%, ctx=865, majf=0, minf=4097 00:16:10.921 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:16:10.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.921 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:10.921 issued rwts: total=4213,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.921 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:10.921 job2: (groupid=0, jobs=1): err= 0: pid=86197: Tue Dec 17 00:31:55 2024 00:16:10.921 read: IOPS=202, BW=50.5MiB/s (53.0MB/s)(510MiB/10101msec) 00:16:10.921 slat (usec): min=22, max=104381, avg=4902.04, stdev=12275.31 00:16:10.921 clat (msec): min=10, max=434, avg=311.40, stdev=95.34 00:16:10.921 lat (msec): min=11, max=490, avg=316.30, stdev=96.68 00:16:10.921 clat percentiles (msec): 00:16:10.921 | 1.00th=[ 31], 5.00th=[ 123], 10.00th=[ 157], 20.00th=[ 213], 00:16:10.921 | 30.00th=[ 317], 40.00th=[ 342], 50.00th=[ 355], 60.00th=[ 363], 00:16:10.921 | 70.00th=[ 372], 80.00th=[ 380], 90.00th=[ 393], 95.00th=[ 401], 00:16:10.921 | 99.00th=[ 418], 99.50th=[ 426], 99.90th=[ 435], 99.95th=[ 435], 00:16:10.921 | 99.99th=[ 435] 00:16:10.921 bw ( KiB/s): min=39936, max=103424, per=6.51%, avg=50601.60, stdev=17000.88, samples=20 00:16:10.921 iops : min= 156, max= 404, avg=197.60, stdev=66.43, samples=20 00:16:10.921 lat (msec) : 20=0.73%, 50=2.06%, 100=1.03%, 250=20.58%, 500=75.60% 00:16:10.921 cpu : usr=0.12%, sys=0.93%, ctx=414, majf=0, minf=4097 00:16:10.921 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:16:10.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.921 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:10.921 issued rwts: total=2041,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.921 latency : 
target=0, window=0, percentile=100.00%, depth=64 00:16:10.921 job3: (groupid=0, jobs=1): err= 0: pid=86199: Tue Dec 17 00:31:55 2024 00:16:10.921 read: IOPS=77, BW=19.3MiB/s (20.2MB/s)(196MiB/10158msec) 00:16:10.921 slat (usec): min=27, max=533665, avg=12800.62, stdev=43910.28 00:16:10.921 clat (msec): min=19, max=1161, avg=816.13, stdev=235.31 00:16:10.921 lat (msec): min=20, max=1384, avg=828.93, stdev=237.30 00:16:10.921 clat percentiles (msec): 00:16:10.921 | 1.00th=[ 199], 5.00th=[ 275], 10.00th=[ 347], 20.00th=[ 726], 00:16:10.921 | 30.00th=[ 768], 40.00th=[ 802], 50.00th=[ 844], 60.00th=[ 894], 00:16:10.921 | 70.00th=[ 969], 80.00th=[ 1028], 90.00th=[ 1062], 95.00th=[ 1099], 00:16:10.921 | 99.00th=[ 1133], 99.50th=[ 1167], 99.90th=[ 1167], 99.95th=[ 1167], 00:16:10.921 | 99.99th=[ 1167] 00:16:10.921 bw ( KiB/s): min= 1536, max=32702, per=2.37%, avg=18403.10, stdev=9654.60, samples=20 00:16:10.921 iops : min= 6, max= 127, avg=71.80, stdev=37.66, samples=20 00:16:10.921 lat (msec) : 20=0.13%, 250=2.55%, 500=9.45%, 750=15.45%, 1000=46.62% 00:16:10.921 lat (msec) : 2000=25.80% 00:16:10.921 cpu : usr=0.01%, sys=0.41%, ctx=138, majf=0, minf=4097 00:16:10.921 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.1%, >=64=92.0% 00:16:10.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.921 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:10.921 issued rwts: total=783,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.921 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:10.921 job4: (groupid=0, jobs=1): err= 0: pid=86200: Tue Dec 17 00:31:55 2024 00:16:10.921 read: IOPS=79, BW=19.8MiB/s (20.8MB/s)(201MiB/10151msec) 00:16:10.921 slat (usec): min=20, max=478459, avg=12456.45, stdev=39401.50 00:16:10.921 clat (msec): min=96, max=1214, avg=793.35, stdev=208.45 00:16:10.921 lat (msec): min=96, max=1214, avg=805.80, stdev=210.67 00:16:10.921 clat percentiles (msec): 00:16:10.921 | 1.00th=[ 109], 5.00th=[ 211], 10.00th=[ 659], 20.00th=[ 718], 00:16:10.921 | 30.00th=[ 743], 40.00th=[ 785], 50.00th=[ 835], 60.00th=[ 877], 00:16:10.921 | 70.00th=[ 911], 80.00th=[ 944], 90.00th=[ 1003], 95.00th=[ 1028], 00:16:10.921 | 99.00th=[ 1083], 99.50th=[ 1083], 99.90th=[ 1217], 99.95th=[ 1217], 00:16:10.921 | 99.99th=[ 1217] 00:16:10.921 bw ( KiB/s): min= 8704, max=29125, per=2.44%, avg=18973.25, stdev=5957.61, samples=20 00:16:10.921 iops : min= 34, max= 113, avg=74.00, stdev=23.26, samples=20 00:16:10.921 lat (msec) : 100=0.87%, 250=5.47%, 500=2.24%, 750=24.97%, 1000=56.40% 00:16:10.921 lat (msec) : 2000=10.06% 00:16:10.921 cpu : usr=0.07%, sys=0.39%, ctx=171, majf=0, minf=4097 00:16:10.921 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.2% 00:16:10.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.921 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:10.921 issued rwts: total=805,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.921 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:10.921 job5: (groupid=0, jobs=1): err= 0: pid=86201: Tue Dec 17 00:31:55 2024 00:16:10.921 read: IOPS=74, BW=18.6MiB/s (19.5MB/s)(189MiB/10152msec) 00:16:10.921 slat (usec): min=20, max=413098, avg=13302.63, stdev=41519.26 00:16:10.921 clat (msec): min=47, max=1198, avg=847.02, stdev=225.71 00:16:10.921 lat (msec): min=48, max=1198, avg=860.32, stdev=226.29 00:16:10.921 clat percentiles (msec): 00:16:10.921 | 1.00th=[ 159], 5.00th=[ 313], 10.00th=[ 550], 
20.00th=[ 709], 00:16:10.921 | 30.00th=[ 751], 40.00th=[ 802], 50.00th=[ 885], 60.00th=[ 953], 00:16:10.921 | 70.00th=[ 1003], 80.00th=[ 1036], 90.00th=[ 1099], 95.00th=[ 1116], 00:16:10.921 | 99.00th=[ 1150], 99.50th=[ 1150], 99.90th=[ 1200], 99.95th=[ 1200], 00:16:10.921 | 99.99th=[ 1200] 00:16:10.921 bw ( KiB/s): min= 6144, max=30720, per=2.27%, avg=17666.90, stdev=8679.82, samples=20 00:16:10.921 iops : min= 24, max= 120, avg=68.90, stdev=33.81, samples=20 00:16:10.921 lat (msec) : 50=0.66%, 250=2.65%, 500=2.79%, 750=23.47%, 1000=38.46% 00:16:10.922 lat (msec) : 2000=31.96% 00:16:10.922 cpu : usr=0.02%, sys=0.35%, ctx=134, majf=0, minf=4097 00:16:10.922 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.2%, >=64=91.6% 00:16:10.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.922 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:10.922 issued rwts: total=754,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.922 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:10.922 job6: (groupid=0, jobs=1): err= 0: pid=86202: Tue Dec 17 00:31:55 2024 00:16:10.922 read: IOPS=418, BW=105MiB/s (110MB/s)(1052MiB/10052msec) 00:16:10.922 slat (usec): min=16, max=157996, avg=2374.36, stdev=6556.52 00:16:10.922 clat (msec): min=20, max=463, avg=150.30, stdev=46.81 00:16:10.922 lat (msec): min=21, max=463, avg=152.67, stdev=47.25 00:16:10.922 clat percentiles (msec): 00:16:10.922 | 1.00th=[ 97], 5.00th=[ 120], 10.00th=[ 126], 20.00th=[ 132], 00:16:10.922 | 30.00th=[ 138], 40.00th=[ 140], 50.00th=[ 144], 60.00th=[ 146], 00:16:10.922 | 70.00th=[ 150], 80.00th=[ 155], 90.00th=[ 163], 95.00th=[ 184], 00:16:10.922 | 99.00th=[ 414], 99.50th=[ 422], 99.90th=[ 451], 99.95th=[ 451], 00:16:10.922 | 99.99th=[ 464] 00:16:10.922 bw ( KiB/s): min=45146, max=122368, per=13.65%, avg=106043.30, stdev=21351.20, samples=20 00:16:10.922 iops : min= 176, max= 478, avg=414.15, stdev=83.44, samples=20 00:16:10.922 lat (msec) : 50=0.26%, 100=0.81%, 250=95.84%, 500=3.09% 00:16:10.922 cpu : usr=0.16%, sys=1.88%, ctx=876, majf=0, minf=4097 00:16:10.922 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:16:10.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.922 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:10.922 issued rwts: total=4207,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.922 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:10.922 job7: (groupid=0, jobs=1): err= 0: pid=86203: Tue Dec 17 00:31:55 2024 00:16:10.922 read: IOPS=79, BW=19.8MiB/s (20.8MB/s)(201MiB/10156msec) 00:16:10.922 slat (usec): min=21, max=399909, avg=12489.89, stdev=40017.42 00:16:10.922 clat (msec): min=51, max=1107, avg=793.57, stdev=166.99 00:16:10.922 lat (msec): min=52, max=1166, avg=806.06, stdev=169.91 00:16:10.922 clat percentiles (msec): 00:16:10.922 | 1.00th=[ 97], 5.00th=[ 443], 10.00th=[ 659], 20.00th=[ 735], 00:16:10.922 | 30.00th=[ 751], 40.00th=[ 785], 50.00th=[ 818], 60.00th=[ 844], 00:16:10.922 | 70.00th=[ 885], 80.00th=[ 919], 90.00th=[ 953], 95.00th=[ 986], 00:16:10.922 | 99.00th=[ 1028], 99.50th=[ 1083], 99.90th=[ 1116], 99.95th=[ 1116], 00:16:10.922 | 99.99th=[ 1116] 00:16:10.922 bw ( KiB/s): min=13312, max=29184, per=2.44%, avg=18969.65, stdev=4362.18, samples=20 00:16:10.922 iops : min= 52, max= 114, avg=74.05, stdev=17.08, samples=20 00:16:10.922 lat (msec) : 100=1.12%, 250=0.87%, 500=5.96%, 750=22.11%, 1000=65.34% 00:16:10.922 lat (msec) : 
2000=4.60% 00:16:10.922 cpu : usr=0.06%, sys=0.38%, ctx=162, majf=0, minf=4097 00:16:10.922 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.2% 00:16:10.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.922 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:10.922 issued rwts: total=805,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.922 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:10.922 job8: (groupid=0, jobs=1): err= 0: pid=86204: Tue Dec 17 00:31:55 2024 00:16:10.922 read: IOPS=75, BW=19.0MiB/s (19.9MB/s)(193MiB/10154msec) 00:16:10.922 slat (usec): min=22, max=675384, avg=13001.28, stdev=43653.92 00:16:10.922 clat (msec): min=28, max=1120, avg=828.20, stdev=165.41 00:16:10.922 lat (msec): min=28, max=1384, avg=841.20, stdev=165.43 00:16:10.922 clat percentiles (msec): 00:16:10.922 | 1.00th=[ 54], 5.00th=[ 651], 10.00th=[ 701], 20.00th=[ 743], 00:16:10.922 | 30.00th=[ 760], 40.00th=[ 810], 50.00th=[ 827], 60.00th=[ 869], 00:16:10.922 | 70.00th=[ 902], 80.00th=[ 969], 90.00th=[ 1020], 95.00th=[ 1036], 00:16:10.922 | 99.00th=[ 1062], 99.50th=[ 1083], 99.90th=[ 1116], 99.95th=[ 1116], 00:16:10.922 | 99.99th=[ 1116] 00:16:10.922 bw ( KiB/s): min= 5643, max=26624, per=2.33%, avg=18093.35, stdev=6020.36, samples=20 00:16:10.922 iops : min= 22, max= 104, avg=70.60, stdev=23.47, samples=20 00:16:10.922 lat (msec) : 50=0.39%, 100=1.04%, 250=0.39%, 500=2.20%, 750=17.38% 00:16:10.922 lat (msec) : 1000=65.63%, 2000=12.97% 00:16:10.922 cpu : usr=0.04%, sys=0.38%, ctx=151, majf=0, minf=4097 00:16:10.922 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=4.2%, >=64=91.8% 00:16:10.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.922 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:10.922 issued rwts: total=771,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.922 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:10.922 job9: (groupid=0, jobs=1): err= 0: pid=86205: Tue Dec 17 00:31:55 2024 00:16:10.922 read: IOPS=186, BW=46.5MiB/s (48.8MB/s)(470MiB/10102msec) 00:16:10.922 slat (usec): min=15, max=110421, avg=5216.80, stdev=12680.10 00:16:10.922 clat (msec): min=15, max=454, avg=338.22, stdev=64.51 00:16:10.922 lat (msec): min=15, max=456, avg=343.44, stdev=65.43 00:16:10.922 clat percentiles (msec): 00:16:10.922 | 1.00th=[ 94], 5.00th=[ 220], 10.00th=[ 251], 20.00th=[ 305], 00:16:10.922 | 30.00th=[ 330], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 363], 00:16:10.922 | 70.00th=[ 372], 80.00th=[ 384], 90.00th=[ 397], 95.00th=[ 409], 00:16:10.922 | 99.00th=[ 422], 99.50th=[ 430], 99.90th=[ 456], 99.95th=[ 456], 00:16:10.922 | 99.99th=[ 456] 00:16:10.922 bw ( KiB/s): min=40367, max=60416, per=5.98%, avg=46481.05, stdev=5141.64, samples=20 00:16:10.922 iops : min= 157, max= 236, avg=181.50, stdev=20.14, samples=20 00:16:10.922 lat (msec) : 20=0.27%, 50=0.48%, 100=0.64%, 250=8.30%, 500=90.32% 00:16:10.922 cpu : usr=0.13%, sys=0.84%, ctx=409, majf=0, minf=4097 00:16:10.922 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:16:10.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.922 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:10.922 issued rwts: total=1880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.922 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:10.922 job10: (groupid=0, jobs=1): err= 0: pid=86206: Tue Dec 17 00:31:55 
2024 00:16:10.922 read: IOPS=1254, BW=314MiB/s (329MB/s)(3144MiB/10022msec) 00:16:10.922 slat (usec): min=19, max=13668, avg=790.53, stdev=1713.55 00:16:10.922 clat (usec): min=18726, max=72191, avg=50139.46, stdev=4363.00 00:16:10.922 lat (usec): min=18838, max=72229, avg=50929.99, stdev=4363.16 00:16:10.922 clat percentiles (usec): 00:16:10.922 | 1.00th=[39060], 5.00th=[43254], 10.00th=[44827], 20.00th=[46924], 00:16:10.922 | 30.00th=[48497], 40.00th=[49546], 50.00th=[50594], 60.00th=[51119], 00:16:10.922 | 70.00th=[52167], 80.00th=[53216], 90.00th=[55313], 95.00th=[56361], 00:16:10.922 | 99.00th=[58983], 99.50th=[60031], 99.90th=[62653], 99.95th=[68682], 00:16:10.922 | 99.99th=[69731] 00:16:10.922 bw ( KiB/s): min=309760, max=341504, per=41.22%, avg=320339.10, stdev=8654.09, samples=20 00:16:10.922 iops : min= 1210, max= 1334, avg=1251.25, stdev=33.76, samples=20 00:16:10.922 lat (msec) : 20=0.05%, 50=44.80%, 100=55.15% 00:16:10.922 cpu : usr=0.72%, sys=4.52%, ctx=2426, majf=0, minf=4097 00:16:10.922 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:16:10.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.922 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:10.922 issued rwts: total=12575,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.922 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:10.922 00:16:10.922 Run status group 0 (all jobs): 00:16:10.922 READ: bw=759MiB/s (796MB/s), 18.6MiB/s-314MiB/s (19.5MB/s-329MB/s), io=7709MiB (8083MB), run=10022-10158msec 00:16:10.922 00:16:10.922 Disk stats (read/write): 00:16:10.922 nvme0n1: ios=3872/0, merge=0/0, ticks=1222034/0, in_queue=1222034, util=97.77% 00:16:10.922 nvme10n1: ios=8303/0, merge=0/0, ticks=1234131/0, in_queue=1234131, util=97.85% 00:16:10.922 nvme1n1: ios=3955/0, merge=0/0, ticks=1223221/0, in_queue=1223221, util=98.21% 00:16:10.922 nvme2n1: ios=1439/0, merge=0/0, ticks=1202482/0, in_queue=1202482, util=98.26% 00:16:10.922 nvme3n1: ios=1486/0, merge=0/0, ticks=1179384/0, in_queue=1179384, util=98.20% 00:16:10.922 nvme4n1: ios=1380/0, merge=0/0, ticks=1189963/0, in_queue=1189963, util=98.47% 00:16:10.922 nvme5n1: ios=8290/0, merge=0/0, ticks=1233930/0, in_queue=1233930, util=98.64% 00:16:10.922 nvme6n1: ios=1486/0, merge=0/0, ticks=1186814/0, in_queue=1186814, util=98.69% 00:16:10.922 nvme7n1: ios=1418/0, merge=0/0, ticks=1179311/0, in_queue=1179311, util=98.90% 00:16:10.922 nvme8n1: ios=3632/0, merge=0/0, ticks=1224283/0, in_queue=1224283, util=99.17% 00:16:10.922 nvme9n1: ios=25040/0, merge=0/0, ticks=1241267/0, in_queue=1241267, util=99.13% 00:16:10.922 00:31:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:16:10.922 [global] 00:16:10.922 thread=1 00:16:10.922 invalidate=1 00:16:10.922 rw=randwrite 00:16:10.922 time_based=1 00:16:10.922 runtime=10 00:16:10.922 ioengine=libaio 00:16:10.922 direct=1 00:16:10.922 bs=262144 00:16:10.922 iodepth=64 00:16:10.922 norandommap=1 00:16:10.922 numjobs=1 00:16:10.922 00:16:10.922 [job0] 00:16:10.922 filename=/dev/nvme0n1 00:16:10.922 [job1] 00:16:10.922 filename=/dev/nvme10n1 00:16:10.922 [job2] 00:16:10.922 filename=/dev/nvme1n1 00:16:10.922 [job3] 00:16:10.922 filename=/dev/nvme2n1 00:16:10.922 [job4] 00:16:10.922 filename=/dev/nvme3n1 00:16:10.922 [job5] 00:16:10.922 filename=/dev/nvme4n1 00:16:10.922 [job6] 00:16:10.922 filename=/dev/nvme5n1 
00:16:10.922 [job7] 00:16:10.922 filename=/dev/nvme6n1 00:16:10.922 [job8] 00:16:10.922 filename=/dev/nvme7n1 00:16:10.922 [job9] 00:16:10.922 filename=/dev/nvme8n1 00:16:10.922 [job10] 00:16:10.922 filename=/dev/nvme9n1 00:16:10.923 Could not set queue depth (nvme0n1) 00:16:10.923 Could not set queue depth (nvme10n1) 00:16:10.923 Could not set queue depth (nvme1n1) 00:16:10.923 Could not set queue depth (nvme2n1) 00:16:10.923 Could not set queue depth (nvme3n1) 00:16:10.923 Could not set queue depth (nvme4n1) 00:16:10.923 Could not set queue depth (nvme5n1) 00:16:10.923 Could not set queue depth (nvme6n1) 00:16:10.923 Could not set queue depth (nvme7n1) 00:16:10.923 Could not set queue depth (nvme8n1) 00:16:10.923 Could not set queue depth (nvme9n1) 00:16:10.923 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:10.923 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:10.923 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:10.923 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:10.923 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:10.923 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:10.923 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:10.923 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:10.923 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:10.923 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:10.923 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:10.923 fio-3.35 00:16:10.923 Starting 11 threads 00:16:20.905 00:16:20.905 job0: (groupid=0, jobs=1): err= 0: pid=86400: Tue Dec 17 00:32:05 2024 00:16:20.905 write: IOPS=417, BW=104MiB/s (109MB/s)(1056MiB/10123msec); 0 zone resets 00:16:20.905 slat (usec): min=18, max=65162, avg=2364.14, stdev=4218.15 00:16:20.905 clat (msec): min=67, max=281, avg=151.04, stdev=23.33 00:16:20.905 lat (msec): min=67, max=281, avg=153.40, stdev=23.29 00:16:20.905 clat percentiles (msec): 00:16:20.905 | 1.00th=[ 136], 5.00th=[ 138], 10.00th=[ 140], 20.00th=[ 142], 00:16:20.905 | 30.00th=[ 146], 40.00th=[ 146], 50.00th=[ 148], 60.00th=[ 148], 00:16:20.905 | 70.00th=[ 150], 80.00th=[ 150], 90.00th=[ 153], 95.00th=[ 199], 00:16:20.905 | 99.00th=[ 268], 99.50th=[ 275], 99.90th=[ 279], 99.95th=[ 284], 00:16:20.905 | 99.99th=[ 284] 00:16:20.905 bw ( KiB/s): min=59273, max=112640, per=10.45%, avg=106442.50, stdev=13385.80, samples=20 00:16:20.905 iops : min= 231, max= 440, avg=415.75, stdev=52.38, samples=20 00:16:20.905 lat (msec) : 100=0.28%, 250=97.30%, 500=2.42% 00:16:20.905 cpu : usr=0.81%, sys=1.27%, ctx=3587, majf=0, minf=1 00:16:20.905 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:16:20.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, 
>=64=0.0% 00:16:20.905 issued rwts: total=0,4222,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.905 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:20.906 job1: (groupid=0, jobs=1): err= 0: pid=86401: Tue Dec 17 00:32:05 2024 00:16:20.906 write: IOPS=271, BW=67.9MiB/s (71.2MB/s)(689MiB/10144msec); 0 zone resets 00:16:20.906 slat (usec): min=19, max=82624, avg=3617.19, stdev=6530.07 00:16:20.906 clat (msec): min=11, max=387, avg=231.77, stdev=38.44 00:16:20.906 lat (msec): min=11, max=387, avg=235.38, stdev=38.57 00:16:20.906 clat percentiles (msec): 00:16:20.906 | 1.00th=[ 57], 5.00th=[ 148], 10.00th=[ 184], 20.00th=[ 230], 00:16:20.906 | 30.00th=[ 234], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 247], 00:16:20.906 | 70.00th=[ 249], 80.00th=[ 249], 90.00th=[ 253], 95.00th=[ 255], 00:16:20.906 | 99.00th=[ 279], 99.50th=[ 342], 99.90th=[ 372], 99.95th=[ 388], 00:16:20.906 | 99.99th=[ 388] 00:16:20.906 bw ( KiB/s): min=63488, max=102605, per=6.77%, avg=68969.90, stdev=9163.56, samples=20 00:16:20.906 iops : min= 248, max= 400, avg=269.35, stdev=35.64, samples=20 00:16:20.906 lat (msec) : 20=0.15%, 50=0.73%, 100=1.02%, 250=81.68%, 500=16.43% 00:16:20.906 cpu : usr=0.53%, sys=0.72%, ctx=1441, majf=0, minf=1 00:16:20.906 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:16:20.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:20.906 issued rwts: total=0,2757,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.906 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:20.906 job2: (groupid=0, jobs=1): err= 0: pid=86413: Tue Dec 17 00:32:05 2024 00:16:20.906 write: IOPS=293, BW=73.3MiB/s (76.8MB/s)(745MiB/10170msec); 0 zone resets 00:16:20.906 slat (usec): min=17, max=34820, avg=3349.60, stdev=5892.31 00:16:20.906 clat (msec): min=30, max=384, avg=214.96, stdev=31.85 00:16:20.906 lat (msec): min=30, max=385, avg=218.31, stdev=31.77 00:16:20.906 clat percentiles (msec): 00:16:20.906 | 1.00th=[ 161], 5.00th=[ 197], 10.00th=[ 199], 20.00th=[ 203], 00:16:20.906 | 30.00th=[ 207], 40.00th=[ 209], 50.00th=[ 211], 60.00th=[ 211], 00:16:20.906 | 70.00th=[ 213], 80.00th=[ 215], 90.00th=[ 220], 95.00th=[ 288], 00:16:20.906 | 99.00th=[ 351], 99.50th=[ 351], 99.90th=[ 372], 99.95th=[ 384], 00:16:20.906 | 99.99th=[ 384] 00:16:20.906 bw ( KiB/s): min=49053, max=79872, per=7.33%, avg=74662.45, stdev=8188.80, samples=20 00:16:20.906 iops : min= 191, max= 312, avg=291.60, stdev=32.08, samples=20 00:16:20.906 lat (msec) : 50=0.27%, 100=0.27%, 250=91.85%, 500=7.62% 00:16:20.906 cpu : usr=0.68%, sys=0.78%, ctx=1408, majf=0, minf=1 00:16:20.906 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:16:20.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:20.906 issued rwts: total=0,2980,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.906 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:20.906 job3: (groupid=0, jobs=1): err= 0: pid=86414: Tue Dec 17 00:32:05 2024 00:16:20.906 write: IOPS=258, BW=64.6MiB/s (67.7MB/s)(656MiB/10148msec); 0 zone resets 00:16:20.906 slat (usec): min=17, max=69846, avg=3685.85, stdev=6682.31 00:16:20.906 clat (msec): min=25, max=393, avg=243.91, stdev=33.35 00:16:20.906 lat (msec): min=25, max=393, avg=247.59, stdev=33.37 00:16:20.906 clat percentiles (msec): 00:16:20.906 | 1.00th=[ 123], 
5.00th=[ 213], 10.00th=[ 230], 20.00th=[ 234], 00:16:20.906 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 245], 60.00th=[ 247], 00:16:20.906 | 70.00th=[ 249], 80.00th=[ 251], 90.00th=[ 253], 95.00th=[ 313], 00:16:20.906 | 99.00th=[ 355], 99.50th=[ 363], 99.90th=[ 380], 99.95th=[ 393], 00:16:20.906 | 99.99th=[ 393] 00:16:20.906 bw ( KiB/s): min=49152, max=69632, per=6.43%, avg=65497.45, stdev=4147.07, samples=20 00:16:20.906 iops : min= 192, max= 272, avg=255.80, stdev=16.21, samples=20 00:16:20.906 lat (msec) : 50=0.31%, 100=0.31%, 250=80.59%, 500=18.80% 00:16:20.906 cpu : usr=0.52%, sys=0.83%, ctx=1371, majf=0, minf=1 00:16:20.906 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:16:20.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:20.906 issued rwts: total=0,2622,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.906 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:20.906 job4: (groupid=0, jobs=1): err= 0: pid=86415: Tue Dec 17 00:32:05 2024 00:16:20.906 write: IOPS=292, BW=73.1MiB/s (76.7MB/s)(744MiB/10174msec); 0 zone resets 00:16:20.906 slat (usec): min=18, max=60305, avg=3355.99, stdev=5956.92 00:16:20.906 clat (msec): min=62, max=382, avg=215.34, stdev=30.37 00:16:20.906 lat (msec): min=62, max=382, avg=218.70, stdev=30.24 00:16:20.906 clat percentiles (msec): 00:16:20.906 | 1.00th=[ 180], 5.00th=[ 197], 10.00th=[ 199], 20.00th=[ 203], 00:16:20.906 | 30.00th=[ 207], 40.00th=[ 209], 50.00th=[ 211], 60.00th=[ 211], 00:16:20.906 | 70.00th=[ 213], 80.00th=[ 215], 90.00th=[ 220], 95.00th=[ 300], 00:16:20.906 | 99.00th=[ 342], 99.50th=[ 351], 99.90th=[ 368], 99.95th=[ 384], 00:16:20.906 | 99.99th=[ 384] 00:16:20.906 bw ( KiB/s): min=49053, max=79872, per=7.32%, avg=74552.40, stdev=8833.69, samples=20 00:16:20.906 iops : min= 191, max= 312, avg=291.15, stdev=34.59, samples=20 00:16:20.906 lat (msec) : 100=0.27%, 250=92.34%, 500=7.39% 00:16:20.906 cpu : usr=0.56%, sys=0.90%, ctx=3592, majf=0, minf=1 00:16:20.906 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:16:20.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:20.906 issued rwts: total=0,2976,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.906 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:20.906 job5: (groupid=0, jobs=1): err= 0: pid=86416: Tue Dec 17 00:32:05 2024 00:16:20.906 write: IOPS=418, BW=105MiB/s (110MB/s)(1059MiB/10123msec); 0 zone resets 00:16:20.906 slat (usec): min=19, max=22236, avg=2356.76, stdev=4105.32 00:16:20.906 clat (msec): min=24, max=274, avg=150.60, stdev=23.17 00:16:20.906 lat (msec): min=24, max=274, avg=152.96, stdev=23.15 00:16:20.906 clat percentiles (msec): 00:16:20.906 | 1.00th=[ 136], 5.00th=[ 138], 10.00th=[ 138], 20.00th=[ 142], 00:16:20.906 | 30.00th=[ 146], 40.00th=[ 146], 50.00th=[ 148], 60.00th=[ 148], 00:16:20.906 | 70.00th=[ 150], 80.00th=[ 150], 90.00th=[ 153], 95.00th=[ 205], 00:16:20.906 | 99.00th=[ 266], 99.50th=[ 268], 99.90th=[ 275], 99.95th=[ 275], 00:16:20.906 | 99.99th=[ 275] 00:16:20.906 bw ( KiB/s): min=67584, max=114688, per=10.48%, avg=106758.55, stdev=11917.27, samples=20 00:16:20.906 iops : min= 264, max= 448, avg=417.00, stdev=46.60, samples=20 00:16:20.906 lat (msec) : 50=0.19%, 100=0.28%, 250=97.52%, 500=2.01% 00:16:20.906 cpu : usr=0.74%, sys=1.33%, ctx=6872, majf=0, 
minf=1 00:16:20.906 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:16:20.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:20.906 issued rwts: total=0,4234,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.906 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:20.906 job6: (groupid=0, jobs=1): err= 0: pid=86417: Tue Dec 17 00:32:05 2024 00:16:20.906 write: IOPS=288, BW=72.2MiB/s (75.7MB/s)(734MiB/10175msec); 0 zone resets 00:16:20.906 slat (usec): min=21, max=215541, avg=3402.68, stdev=7061.49 00:16:20.906 clat (msec): min=164, max=496, avg=218.23, stdev=37.58 00:16:20.906 lat (msec): min=178, max=496, avg=221.63, stdev=37.49 00:16:20.906 clat percentiles (msec): 00:16:20.906 | 1.00th=[ 194], 5.00th=[ 197], 10.00th=[ 199], 20.00th=[ 203], 00:16:20.906 | 30.00th=[ 207], 40.00th=[ 209], 50.00th=[ 211], 60.00th=[ 211], 00:16:20.906 | 70.00th=[ 213], 80.00th=[ 215], 90.00th=[ 220], 95.00th=[ 309], 00:16:20.906 | 99.00th=[ 376], 99.50th=[ 439], 99.90th=[ 498], 99.95th=[ 498], 00:16:20.906 | 99.99th=[ 498] 00:16:20.906 bw ( KiB/s): min=32833, max=79872, per=7.22%, avg=73562.20, stdev=12068.17, samples=20 00:16:20.906 iops : min= 128, max= 312, avg=287.30, stdev=47.18, samples=20 00:16:20.906 lat (msec) : 250=92.68%, 500=7.32% 00:16:20.906 cpu : usr=0.49%, sys=0.94%, ctx=5936, majf=0, minf=1 00:16:20.906 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:16:20.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:20.906 issued rwts: total=0,2937,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.906 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:20.906 job7: (groupid=0, jobs=1): err= 0: pid=86418: Tue Dec 17 00:32:05 2024 00:16:20.906 write: IOPS=269, BW=67.4MiB/s (70.6MB/s)(683MiB/10139msec); 0 zone resets 00:16:20.906 slat (usec): min=16, max=118862, avg=3656.25, stdev=6797.73 00:16:20.906 clat (msec): min=121, max=381, avg=233.78, stdev=31.06 00:16:20.906 lat (msec): min=121, max=381, avg=237.43, stdev=30.93 00:16:20.906 clat percentiles (msec): 00:16:20.906 | 1.00th=[ 136], 5.00th=[ 150], 10.00th=[ 180], 20.00th=[ 230], 00:16:20.906 | 30.00th=[ 234], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 247], 00:16:20.906 | 70.00th=[ 249], 80.00th=[ 249], 90.00th=[ 251], 95.00th=[ 253], 00:16:20.906 | 99.00th=[ 275], 99.50th=[ 334], 99.90th=[ 368], 99.95th=[ 380], 00:16:20.906 | 99.99th=[ 380] 00:16:20.906 bw ( KiB/s): min=64000, max=88064, per=6.71%, avg=68326.40, stdev=6187.58, samples=20 00:16:20.906 iops : min= 250, max= 344, avg=266.90, stdev=24.17, samples=20 00:16:20.906 lat (msec) : 250=84.77%, 500=15.23% 00:16:20.906 cpu : usr=0.54%, sys=0.69%, ctx=1203, majf=0, minf=1 00:16:20.906 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:16:20.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:20.906 issued rwts: total=0,2732,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.906 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:20.906 job8: (groupid=0, jobs=1): err= 0: pid=86419: Tue Dec 17 00:32:05 2024 00:16:20.906 write: IOPS=780, BW=195MiB/s (205MB/s)(1964MiB/10062msec); 0 zone resets 00:16:20.906 slat (usec): min=19, max=133473, avg=1237.11, 
stdev=2694.12 00:16:20.906 clat (usec): min=902, max=334794, avg=80697.20, stdev=28663.61 00:16:20.906 lat (usec): min=964, max=334842, avg=81934.31, stdev=28981.29 00:16:20.906 clat percentiles (msec): 00:16:20.907 | 1.00th=[ 15], 5.00th=[ 73], 10.00th=[ 74], 20.00th=[ 75], 00:16:20.907 | 30.00th=[ 78], 40.00th=[ 79], 50.00th=[ 79], 60.00th=[ 80], 00:16:20.907 | 70.00th=[ 80], 80.00th=[ 81], 90.00th=[ 82], 95.00th=[ 84], 00:16:20.907 | 99.00th=[ 266], 99.50th=[ 284], 99.90th=[ 305], 99.95th=[ 321], 00:16:20.907 | 99.99th=[ 334] 00:16:20.907 bw ( KiB/s): min=49152, max=228352, per=19.58%, avg=199526.40, stdev=35807.69, samples=20 00:16:20.907 iops : min= 192, max= 892, avg=779.40, stdev=139.87, samples=20 00:16:20.907 lat (usec) : 1000=0.01% 00:16:20.907 lat (msec) : 2=0.10%, 4=0.14%, 10=0.52%, 20=0.56%, 50=1.34% 00:16:20.907 lat (msec) : 100=94.72%, 250=1.06%, 500=1.55% 00:16:20.907 cpu : usr=1.45%, sys=2.23%, ctx=2434, majf=0, minf=1 00:16:20.907 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:20.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:20.907 issued rwts: total=0,7857,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.907 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:20.907 job9: (groupid=0, jobs=1): err= 0: pid=86420: Tue Dec 17 00:32:05 2024 00:16:20.907 write: IOPS=290, BW=72.7MiB/s (76.2MB/s)(739MiB/10169msec); 0 zone resets 00:16:20.907 slat (usec): min=16, max=153049, avg=3378.15, stdev=6458.70 00:16:20.907 clat (msec): min=154, max=415, avg=216.70, stdev=31.41 00:16:20.907 lat (msec): min=154, max=415, avg=220.08, stdev=31.23 00:16:20.907 clat percentiles (msec): 00:16:20.907 | 1.00th=[ 192], 5.00th=[ 197], 10.00th=[ 199], 20.00th=[ 203], 00:16:20.907 | 30.00th=[ 207], 40.00th=[ 209], 50.00th=[ 211], 60.00th=[ 211], 00:16:20.907 | 70.00th=[ 213], 80.00th=[ 215], 90.00th=[ 220], 95.00th=[ 309], 00:16:20.907 | 99.00th=[ 351], 99.50th=[ 368], 99.90th=[ 418], 99.95th=[ 418], 00:16:20.907 | 99.99th=[ 418] 00:16:20.907 bw ( KiB/s): min=38912, max=79712, per=7.27%, avg=74052.80, stdev=10464.93, samples=20 00:16:20.907 iops : min= 152, max= 311, avg=289.25, stdev=40.87, samples=20 00:16:20.907 lat (msec) : 250=92.59%, 500=7.41% 00:16:20.907 cpu : usr=0.52%, sys=0.88%, ctx=3509, majf=0, minf=1 00:16:20.907 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:16:20.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:20.907 issued rwts: total=0,2956,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.907 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:20.907 job10: (groupid=0, jobs=1): err= 0: pid=86421: Tue Dec 17 00:32:05 2024 00:16:20.907 write: IOPS=417, BW=104MiB/s (109MB/s)(1057MiB/10122msec); 0 zone resets 00:16:20.907 slat (usec): min=18, max=34878, avg=2360.37, stdev=4139.72 00:16:20.907 clat (msec): min=22, max=279, avg=150.81, stdev=24.56 00:16:20.907 lat (msec): min=22, max=279, avg=153.17, stdev=24.57 00:16:20.907 clat percentiles (msec): 00:16:20.907 | 1.00th=[ 136], 5.00th=[ 138], 10.00th=[ 138], 20.00th=[ 142], 00:16:20.907 | 30.00th=[ 146], 40.00th=[ 146], 50.00th=[ 148], 60.00th=[ 148], 00:16:20.907 | 70.00th=[ 148], 80.00th=[ 150], 90.00th=[ 153], 95.00th=[ 201], 00:16:20.907 | 99.00th=[ 271], 99.50th=[ 275], 99.90th=[ 279], 99.95th=[ 279], 00:16:20.907 | 
99.99th=[ 279] 00:16:20.907 bw ( KiB/s): min=63361, max=112640, per=10.46%, avg=106595.50, stdev=12617.91, samples=20 00:16:20.907 iops : min= 247, max= 440, avg=416.35, stdev=49.38, samples=20 00:16:20.907 lat (msec) : 50=0.28%, 100=0.28%, 250=96.88%, 500=2.55% 00:16:20.907 cpu : usr=0.81%, sys=1.17%, ctx=4241, majf=0, minf=1 00:16:20.907 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:16:20.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:20.907 issued rwts: total=0,4228,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.907 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:20.907 00:16:20.907 Run status group 0 (all jobs): 00:16:20.907 WRITE: bw=995MiB/s (1043MB/s), 64.6MiB/s-195MiB/s (67.7MB/s-205MB/s), io=9.89GiB (10.6GB), run=10062-10175msec 00:16:20.907 00:16:20.907 Disk stats (read/write): 00:16:20.907 nvme0n1: ios=49/8307, merge=0/0, ticks=39/1213030, in_queue=1213069, util=97.81% 00:16:20.907 nvme10n1: ios=49/5380, merge=0/0, ticks=138/1206593, in_queue=1206731, util=98.09% 00:16:20.907 nvme1n1: ios=43/5835, merge=0/0, ticks=23/1209338, in_queue=1209361, util=98.07% 00:16:20.907 nvme2n1: ios=34/5117, merge=0/0, ticks=105/1209102, in_queue=1209207, util=98.35% 00:16:20.907 nvme3n1: ios=22/5821, merge=0/0, ticks=26/1209800, in_queue=1209826, util=98.10% 00:16:20.907 nvme4n1: ios=0/8339, merge=0/0, ticks=0/1212687, in_queue=1212687, util=98.28% 00:16:20.907 nvme5n1: ios=0/5739, merge=0/0, ticks=0/1208494, in_queue=1208494, util=98.33% 00:16:20.907 nvme6n1: ios=0/5326, merge=0/0, ticks=0/1206163, in_queue=1206163, util=98.26% 00:16:20.907 nvme7n1: ios=0/15548, merge=0/0, ticks=0/1215715, in_queue=1215715, util=98.55% 00:16:20.907 nvme8n1: ios=0/5778, merge=0/0, ticks=0/1208608, in_queue=1208608, util=98.73% 00:16:20.907 nvme9n1: ios=0/8322, merge=0/0, ticks=0/1213114, in_queue=1213114, util=98.88% 00:16:20.907 00:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:16:20.907 00:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:16:20.907 00:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:20.907 00:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:20.907 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:16:20.907 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:16:20.907 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:16:20.907 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.907 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:20.908 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.908 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:20.908 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:16:20.908 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:16:20.908 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:16:20.908 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:20.908 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:20.908 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:16:20.908 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:20.908 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:16:20.908 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:20.908 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:16:20.908 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.908 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:20.908 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.908 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:20.908 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:16:20.908 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:16:20.908 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:16:20.908 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:20.908 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:20.908 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:16:20.908 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:16:20.908 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:20.908 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:20.908 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:16:20.908 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.908 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:20.908 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.908 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:20.908 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:16:21.167 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:16:21.167 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:16:21.167 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:21.167 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:21.167 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:16:21.167 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:21.167 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:16:21.167 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:21.167 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:16:21.167 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.167 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:21.167 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.167 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:21.167 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:16:21.167 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:16:21.167 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:16:21.167 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:21.167 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:21.167 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:16:21.167 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:16:21.167 00:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:21.168 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:21.168 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:16:21.168 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.168 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:21.168 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.168 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:21.168 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:16:21.168 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:16:21.168 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:16:21.168 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:21.168 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:21.168 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:16:21.168 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:16:21.168 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:21.168 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:21.168 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:16:21.168 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.168 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:21.168 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.168 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:21.168 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:16:21.168 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:16:21.168 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:16:21.168 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:21.168 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:21.168 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:16:21.168 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:16:21.168 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:21.168 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:21.168 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:16:21.168 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.168 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:21.427 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.427 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:21.427 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:16:21.427 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:16:21.427 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:16:21.427 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:21.427 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:21.427 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:16:21.427 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:21.427 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:16:21.427 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:21.427 00:32:07 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:16:21.427 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.427 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:21.427 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.427 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:16:21.427 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:16:21.427 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:16:21.427 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:21.427 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:16:21.427 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:21.427 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:16:21.427 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:21.427 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:21.427 rmmod nvme_tcp 00:16:21.427 rmmod nvme_fabrics 00:16:21.427 rmmod nvme_keyring 00:16:21.427 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:21.427 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:16:21.427 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:16:21.427 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@513 -- # '[' -n 85739 ']' 00:16:21.427 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@514 -- # killprocess 85739 00:16:21.427 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 85739 ']' 00:16:21.427 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 85739 00:16:21.427 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:16:21.427 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:21.427 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85739 00:16:21.427 killing process with pid 85739 00:16:21.427 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:21.427 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:21.428 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85739' 00:16:21.428 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 85739 00:16:21.428 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 85739 00:16:21.687 
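The block above is the per-subsystem teardown from multiconnection.sh lines 37-40, repeated for cnode1 through cnode11 and followed by nvmftestfini (module unload plus killprocess 85739). Condensed into a sketch from the line markers in the trace; the waitforserial_disconnect polling is abbreviated here, the real helper in autotest_common.sh retries with its own timeout:

  # teardown sketch reconstructed from the multiconnection.sh trace (NVMF_SUBSYS=11 in this run)
  for i in $(seq 1 "$NVMF_SUBSYS"); do
      # detach the initiator-side controller for this subsystem
      nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
      # wait until no block device with serial SPDK$i is visible any more
      while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK$i"; do
          sleep 1
      done
      # remove the subsystem on the target side over the RPC socket
      rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
  done
  rm -f ./local-job0-0-verify.state
  nvmftestfini   # rmmod nvme-tcp / nvme-fabrics / nvme-keyring, then kill the nvmf_tgt process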
00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:21.687 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:21.687 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:21.687 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:16:21.687 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-save 00:16:21.687 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:21.687 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-restore 00:16:21.687 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:21.687 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:21.687 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:21.687 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:21.687 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:21.946 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:21.946 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:21.946 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:21.946 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:21.946 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:21.946 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:21.946 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:21.946 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:21.946 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:21.946 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:21.946 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:21.946 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.946 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:21.946 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.946 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@300 -- # return 0 00:16:21.946 00:16:21.946 real 0m48.831s 00:16:21.946 user 2m45.850s 00:16:21.946 sys 0m26.686s 00:16:21.946 ************************************ 00:16:21.946 END TEST 
nvmf_multiconnection 00:16:21.946 ************************************ 00:16:21.946 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:21.946 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:21.946 00:32:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:16:21.946 00:32:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:21.946 00:32:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:21.946 00:32:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:21.946 ************************************ 00:16:21.946 START TEST nvmf_initiator_timeout 00:16:21.946 ************************************ 00:16:21.946 00:32:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:16:22.207 * Looking for test storage... 00:16:22.207 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:22.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.207 --rc genhtml_branch_coverage=1 00:16:22.207 --rc genhtml_function_coverage=1 00:16:22.207 --rc genhtml_legend=1 00:16:22.207 --rc geninfo_all_blocks=1 00:16:22.207 --rc geninfo_unexecuted_blocks=1 00:16:22.207 00:16:22.207 ' 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:22.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.207 --rc genhtml_branch_coverage=1 00:16:22.207 --rc genhtml_function_coverage=1 00:16:22.207 --rc genhtml_legend=1 00:16:22.207 --rc geninfo_all_blocks=1 00:16:22.207 --rc geninfo_unexecuted_blocks=1 00:16:22.207 00:16:22.207 ' 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:22.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.207 --rc genhtml_branch_coverage=1 00:16:22.207 --rc genhtml_function_coverage=1 00:16:22.207 --rc genhtml_legend=1 00:16:22.207 --rc geninfo_all_blocks=1 00:16:22.207 --rc geninfo_unexecuted_blocks=1 00:16:22.207 00:16:22.207 ' 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:22.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.207 --rc genhtml_branch_coverage=1 00:16:22.207 --rc genhtml_function_coverage=1 00:16:22.207 --rc genhtml_legend=1 00:16:22.207 --rc geninfo_all_blocks=1 00:16:22.207 --rc geninfo_unexecuted_blocks=1 00:16:22.207 00:16:22.207 ' 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:16:22.207 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:22.208 00:32:08 
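initiator_timeout.sh begins by sourcing test/nvmf/common.sh, and the xtrace above shows the defaults it picks up for a virtual-NIC (NET_TYPE=virt) TCP run. The values below are copied from that trace; the host NQN and host ID are regenerated by nvme gen-hostnqn on every run, so they differ between builds:

  NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422       # listener ports used by the tests
  NVMF_SERIAL=SPDKISFASTANDAWESOME                                # serial string the waitforserial helpers match on
  NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858   # from 'nvme gen-hostnqn'
  NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858                # the UUID portion of the host NQN
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  NET_TYPE=virt                                                   # veth/namespace topology instead of physical NICs
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)                     # target app flags: shared-memory id and full trace mask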
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:22.208 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@456 -- # nvmf_veth_init 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
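nvmftestinit then builds the virtual topology those variables describe: the host keeps two addressed initiator interfaces (nvmf_init_if at 10.0.0.1, nvmf_init_if2 at 10.0.0.2), the target ends (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4) live inside the nvmf_tgt_ns_spdk namespace, and the four host-side peer interfaces are enslaved to the nvmf_br bridge. The commands that follow in the trace amount to this condensed sketch (the individual link-up steps and the iptables ACCEPT rules added afterwards are folded together here):

  ip netns add nvmf_tgt_ns_spdk
  # veth pairs: the *_if end carries the address, the *_br end is enslaved to the bridge
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  # target-side ends move into the namespace where nvmf_tgt will run
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # bridge the host-side peers together and bring everything up
  ip link add nvmf_br type bridge
  for l in nvmf_br nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$l" up
  done
  for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$l" master nvmf_br
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up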
00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:22.208 Cannot find device "nvmf_init_br" 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:22.208 Cannot find device "nvmf_init_br2" 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:22.208 Cannot find device "nvmf_tgt_br" 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # true 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:22.208 Cannot find device "nvmf_tgt_br2" 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # true 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:22.208 Cannot find device "nvmf_init_br" 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # true 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:22.208 Cannot find device "nvmf_init_br2" 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # true 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:22.208 Cannot find device "nvmf_tgt_br" 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # true 00:16:22.208 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:22.468 Cannot find device "nvmf_tgt_br2" 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # true 00:16:22.468 00:32:08 
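The "Cannot find device" messages in this stretch (and the "Cannot open network namespace" ones just below) are expected: before creating anything, the setup tries to tear down interfaces and the namespace left over from a previous run, and each attempt is paired with true in the trace so a failed cleanup step does not abort the run. The pattern is roughly the following sketch, not the literal common.sh code:

  # best-effort removal of stale interfaces from an earlier run; failures are ignored
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
      ip link delete "$dev" 2>/dev/null || true
  done
  ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true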
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:22.468 Cannot find device "nvmf_br" 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # true 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:22.468 Cannot find device "nvmf_init_if" 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # true 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:22.468 Cannot find device "nvmf_init_if2" 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # true 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:22.468 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # true 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:22.468 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # true 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set 
nvmf_init_br up 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:22.468 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:22.468 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:16:22.468 00:16:22.468 --- 10.0.0.3 ping statistics --- 00:16:22.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.468 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:22.468 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:16:22.468 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.130 ms 00:16:22.468 00:16:22.468 --- 10.0.0.4 ping statistics --- 00:16:22.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.468 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:16:22.468 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:22.728 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:22.728 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:16:22.728 00:16:22.728 --- 10.0.0.1 ping statistics --- 00:16:22.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.728 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:16:22.728 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:22.728 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:22.728 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:16:22.728 00:16:22.728 --- 10.0.0.2 ping statistics --- 00:16:22.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.728 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:16:22.728 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:22.728 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@457 -- # return 0 00:16:22.728 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:22.728 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:22.728 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:22.728 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:22.728 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:22.728 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:22.728 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:22.728 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:16:22.728 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:22.728 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:22.728 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:22.728 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@505 -- # nvmfpid=86847 00:16:22.728 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:22.728 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@506 -- # waitforlisten 86847 00:16:22.728 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 86847 ']' 00:16:22.728 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.728 00:32:08 
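For reference, the network fixture whose construction is traced above reduces to the shell sketch below. Interface names, addresses and the nvmf_tgt_ns_spdk namespace are taken verbatim from the log; the link-up steps, the second initiator's firewall rule and the SPDK helper wrappers (ipts) are left out for brevity.

  # two veth pairs for the initiators, two for the target; target ends live in a netns
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # initiators are 10.0.0.1/.2, targets 10.0.0.3/.4, all on one /24
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # the peer ends are enslaved to one bridge so both sides share a broadcast domain
  ip link add nvmf_br type bridge
  for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$l" master nvmf_br
  done
  # open the NVMe/TCP port on the initiator interface, then sanity-check reachability
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3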
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:22.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.728 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.728 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:22.728 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:22.728 [2024-12-17 00:32:08.564636] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:16:22.728 [2024-12-17 00:32:08.564729] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:22.728 [2024-12-17 00:32:08.705234] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:22.988 [2024-12-17 00:32:08.743630] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:22.988 [2024-12-17 00:32:08.743695] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:22.988 [2024-12-17 00:32:08.743705] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:22.988 [2024-12-17 00:32:08.743712] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:22.988 [2024-12-17 00:32:08.743718] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
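The target application itself is started inside that namespace with all tracepoint groups enabled (-e 0xFFFF) and a four-core mask (-m 0xF). A hand-run equivalent of nvmfappstart, using the binary path from the log, might look like the following; the RPC polling loop is only one simple stand-in for the waitforlisten helper, not its actual implementation.

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # wait until the app answers RPCs on the default UNIX-domain socket
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done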
00:16:22.988 [2024-12-17 00:32:08.743863] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.988 [2024-12-17 00:32:08.744448] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:22.988 [2024-12-17 00:32:08.744591] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:22.988 [2024-12-17 00:32:08.744730] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.988 [2024-12-17 00:32:08.774044] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:22.988 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:22.988 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:16:22.988 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:22.988 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:22.988 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:22.988 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:22.988 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:22.988 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:22.988 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.988 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:22.988 Malloc0 00:16:22.988 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.988 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:16:22.988 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.988 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:22.988 Delay0 00:16:22.988 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.988 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:22.988 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.988 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:22.988 [2024-12-17 00:32:08.903433] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:22.988 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.988 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:22.988 00:32:08 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.988 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:22.988 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.988 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:22.988 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.988 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:22.988 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.988 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:22.988 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.988 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:22.988 [2024-12-17 00:32:08.935617] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:22.988 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.988 00:32:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid=93817295-c2e4-400f-aefe-caa93fc06858 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:23.247 00:32:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:16:23.247 00:32:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:16:23.247 00:32:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:23.247 00:32:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:23.247 00:32:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:16:25.152 00:32:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:25.152 00:32:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:25.152 00:32:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:25.152 00:32:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:25.152 00:32:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:25.152 00:32:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:16:25.152 00:32:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=86904 00:16:25.152 00:32:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
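All of the target configuration above goes through SPDK's JSON-RPC interface via the rpc_cmd helper. Issued directly with scripts/rpc.py, the same sequence reads roughly as below; the NQN, serial, listener address and delay parameters are exactly those visible in the trace (the delay bdev latencies are given in microseconds, so 30 means 30 us), and the hostnqn/hostid flags on nvme connect are dropped for brevity.

  # 64 MiB, 512-byte-block malloc bdev wrapped in a delay bdev with small latencies
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
  # TCP transport, a subsystem exposing Delay0, and a listener on 10.0.0.3:4420
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # kernel initiator in the default namespace connects across the bridge
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420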
target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:16:25.153 00:32:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:16:25.153 [global] 00:16:25.153 thread=1 00:16:25.153 invalidate=1 00:16:25.153 rw=write 00:16:25.153 time_based=1 00:16:25.153 runtime=60 00:16:25.153 ioengine=libaio 00:16:25.153 direct=1 00:16:25.153 bs=4096 00:16:25.153 iodepth=1 00:16:25.153 norandommap=0 00:16:25.153 numjobs=1 00:16:25.153 00:16:25.153 verify_dump=1 00:16:25.153 verify_backlog=512 00:16:25.153 verify_state_save=0 00:16:25.153 do_verify=1 00:16:25.153 verify=crc32c-intel 00:16:25.153 [job0] 00:16:25.153 filename=/dev/nvme0n1 00:16:25.153 Could not set queue depth (nvme0n1) 00:16:25.411 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:25.411 fio-3.35 00:16:25.411 Starting 1 thread 00:16:28.740 00:32:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:16:28.740 00:32:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.740 00:32:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:28.740 true 00:16:28.740 00:32:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.740 00:32:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:16:28.740 00:32:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.740 00:32:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:28.740 true 00:16:28.740 00:32:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.740 00:32:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:16:28.740 00:32:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.740 00:32:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:28.740 true 00:16:28.740 00:32:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.740 00:32:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:16:28.740 00:32:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.740 00:32:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:28.740 true 00:16:28.740 00:32:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.740 00:32:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:16:31.272 00:32:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:16:31.272 00:32:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
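The fio job generated above is a 60-second, queue-depth-1, 4 KiB write workload with CRC32C verification against /dev/nvme0n1. The point of the test is the bdev_delay_update_latency calls that follow: while fio runs, the delay bdev's average and p99 latencies are raised to roughly 31 seconds (values in microseconds), which should hold I/O longer than a typical initiator I/O timeout, and the symmetric calls with a value of 30 traced just below drop them back so fio can complete and verify. Driven by hand, the bump looks like this sketch:

  # push Delay0's latencies up to ~31 s while the verify job is in flight
  scripts/rpc.py bdev_delay_update_latency Delay0 avg_read  31000000
  scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
  scripts/rpc.py bdev_delay_update_latency Delay0 p99_read  31000000
  scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 310000000   # value as traced above
  sleep 3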
common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.272 00:32:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:31.272 true 00:16:31.272 00:32:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.272 00:32:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:16:31.272 00:32:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.272 00:32:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:31.272 true 00:16:31.272 00:32:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.272 00:32:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:16:31.272 00:32:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.272 00:32:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:31.272 true 00:16:31.272 00:32:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.272 00:32:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:16:31.272 00:32:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.272 00:32:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:31.272 true 00:16:31.272 00:32:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.272 00:32:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:16:31.272 00:32:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 86904 00:17:27.543 00:17:27.543 job0: (groupid=0, jobs=1): err= 0: pid=86925: Tue Dec 17 00:33:11 2024 00:17:27.543 read: IOPS=802, BW=3209KiB/s (3286kB/s)(188MiB/60000msec) 00:17:27.543 slat (usec): min=10, max=15612, avg=14.71, stdev=79.62 00:17:27.543 clat (usec): min=153, max=40765k, avg=1047.94, stdev=185818.71 00:17:27.543 lat (usec): min=165, max=40765k, avg=1062.65, stdev=185818.73 00:17:27.543 clat percentiles (usec): 00:17:27.543 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 180], 00:17:27.543 | 30.00th=[ 186], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 204], 00:17:27.543 | 70.00th=[ 210], 80.00th=[ 221], 90.00th=[ 235], 95.00th=[ 245], 00:17:27.543 | 99.00th=[ 273], 99.50th=[ 285], 99.90th=[ 322], 99.95th=[ 343], 00:17:27.543 | 99.99th=[ 627] 00:17:27.543 write: IOPS=809, BW=3239KiB/s (3317kB/s)(190MiB/60000msec); 0 zone resets 00:17:27.543 slat (usec): min=13, max=635, avg=21.50, stdev= 7.77 00:17:27.543 clat (usec): min=114, max=1610, avg=157.55, stdev=23.59 00:17:27.543 lat (usec): min=131, max=1628, avg=179.05, stdev=25.23 00:17:27.543 clat percentiles (usec): 00:17:27.543 | 1.00th=[ 121], 5.00th=[ 127], 10.00th=[ 133], 20.00th=[ 139], 00:17:27.543 | 30.00th=[ 145], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 161], 00:17:27.543 | 70.00th=[ 167], 80.00th=[ 176], 90.00th=[ 188], 95.00th=[ 198], 00:17:27.543 | 99.00th=[ 
221], 99.50th=[ 235], 99.90th=[ 269], 99.95th=[ 306], 00:17:27.543 | 99.99th=[ 537] 00:17:27.543 bw ( KiB/s): min= 6896, max=11944, per=100.00%, avg=9956.84, stdev=1141.51, samples=38 00:17:27.543 iops : min= 1724, max= 2986, avg=2489.21, stdev=285.38, samples=38 00:17:27.543 lat (usec) : 250=98.01%, 500=1.97%, 750=0.01% 00:17:27.543 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:17:27.543 cpu : usr=0.65%, sys=2.23%, ctx=96717, majf=0, minf=5 00:17:27.543 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:27.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.543 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.543 issued rwts: total=48128,48582,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.543 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:27.543 00:17:27.543 Run status group 0 (all jobs): 00:17:27.543 READ: bw=3209KiB/s (3286kB/s), 3209KiB/s-3209KiB/s (3286kB/s-3286kB/s), io=188MiB (197MB), run=60000-60000msec 00:17:27.543 WRITE: bw=3239KiB/s (3317kB/s), 3239KiB/s-3239KiB/s (3317kB/s-3317kB/s), io=190MiB (199MB), run=60000-60000msec 00:17:27.543 00:17:27.543 Disk stats (read/write): 00:17:27.543 nvme0n1: ios=48285/48128, merge=0/0, ticks=9986/8046, in_queue=18032, util=99.71% 00:17:27.543 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:27.543 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:27.543 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:27.543 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:17:27.543 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:27.543 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:27.543 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:27.543 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:27.543 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:17:27.543 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:17:27.543 nvmf hotplug test: fio successful as expected 00:17:27.543 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:17:27.543 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:27.543 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.543 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:27.543 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.543 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:17:27.543 00:33:11 
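As a quick consistency check on the fio summary above, the reported bandwidths follow directly from the issued I/O counts (48128 reads and 48582 writes of 4 KiB over 60 seconds):

  reads : 48128 x 4 KiB = 192512 KiB ≈ 188 MiB, and 192512 KiB / 60 s ≈ 3209 KiB/s (matches READ)
  writes: 48582 x 4 KiB = 194328 KiB ≈ 190 MiB, and 194328 KiB / 60 s ≈ 3239 KiB/s (matches WRITE)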
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:17:27.543 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:17:27.543 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:27.543 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:17:27.543 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:27.543 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:17:27.543 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:27.544 rmmod nvme_tcp 00:17:27.544 rmmod nvme_fabrics 00:17:27.544 rmmod nvme_keyring 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@513 -- # '[' -n 86847 ']' 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@514 -- # killprocess 86847 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 86847 ']' 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 86847 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86847 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86847' 00:17:27.544 killing process with pid 86847 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 86847 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 86847 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-save 00:17:27.544 00:33:11 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-restore 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@300 -- # return 0 00:17:27.544 00:17:27.544 real 1m4.053s 00:17:27.544 user 3m49.262s 00:17:27.544 sys 0m22.875s 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:27.544 00:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:27.544 ************************************ 00:17:27.544 END TEST nvmf_initiator_timeout 00:17:27.544 ************************************ 00:17:27.544 00:33:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:17:27.544 00:33:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT 
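Condensed, the teardown traced above undoes the fixture: the target is killed, only the SPDK-tagged firewall rules are filtered back out of the ruleset, and the bridge, veth pairs and namespace are removed. A sketch using the names from the log (the final netns delete is an assumption about what the xtrace-disabled remove_spdk_ns helper ultimately does):

  kill "$nvmfpid"                                        # nvmf_tgt, pid 86847 in this run
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep everything except SPDK's rules
  for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$l" nomaster
      ip link set "$l" down
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk                       # assumed equivalent of remove_spdk_ns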
SIGTERM EXIT 00:17:27.544 00:17:27.544 real 6m47.912s 00:17:27.544 user 16m54.615s 00:17:27.544 sys 1m54.938s 00:17:27.544 00:33:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:27.544 00:33:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:27.544 ************************************ 00:17:27.544 END TEST nvmf_target_extra 00:17:27.544 ************************************ 00:17:27.544 00:33:12 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:27.544 00:33:12 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:27.544 00:33:12 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:27.544 00:33:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:27.544 ************************************ 00:17:27.544 START TEST nvmf_host 00:17:27.544 ************************************ 00:17:27.544 00:33:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:27.544 * Looking for test storage... 00:17:27.544 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:17:27.544 00:33:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:27.544 00:33:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:17:27.544 00:33:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:27.544 00:33:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:27.544 00:33:12 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:27.544 00:33:12 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:27.544 00:33:12 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:27.544 00:33:12 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:27.544 00:33:12 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:27.544 00:33:12 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:27.544 00:33:12 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:27.544 00:33:12 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:27.544 00:33:12 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:27.544 00:33:12 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:27.544 00:33:12 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:27.544 00:33:12 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:17:27.544 00:33:12 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:17:27.544 00:33:12 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:27.544 00:33:12 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:27.544 00:33:12 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:17:27.544 00:33:12 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:17:27.544 00:33:12 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:27.544 00:33:12 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:17:27.544 00:33:12 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:27.544 00:33:12 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:17:27.544 00:33:12 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:17:27.544 00:33:12 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:27.544 00:33:12 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:17:27.544 00:33:12 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:27.544 00:33:12 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:27.544 00:33:12 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:27.544 00:33:12 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:17:27.544 00:33:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:27.544 00:33:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:27.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.544 --rc genhtml_branch_coverage=1 00:17:27.544 --rc genhtml_function_coverage=1 00:17:27.544 --rc genhtml_legend=1 00:17:27.544 --rc geninfo_all_blocks=1 00:17:27.544 --rc geninfo_unexecuted_blocks=1 00:17:27.544 00:17:27.544 ' 00:17:27.544 00:33:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:27.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.544 --rc genhtml_branch_coverage=1 00:17:27.544 --rc genhtml_function_coverage=1 00:17:27.544 --rc genhtml_legend=1 00:17:27.544 --rc geninfo_all_blocks=1 00:17:27.544 --rc geninfo_unexecuted_blocks=1 00:17:27.544 00:17:27.544 ' 00:17:27.544 00:33:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:27.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.544 --rc genhtml_branch_coverage=1 00:17:27.544 --rc genhtml_function_coverage=1 00:17:27.544 --rc genhtml_legend=1 00:17:27.544 --rc geninfo_all_blocks=1 00:17:27.544 --rc geninfo_unexecuted_blocks=1 00:17:27.544 00:17:27.544 ' 00:17:27.544 00:33:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:27.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.544 --rc genhtml_branch_coverage=1 00:17:27.544 --rc genhtml_function_coverage=1 00:17:27.544 --rc genhtml_legend=1 00:17:27.544 --rc geninfo_all_blocks=1 00:17:27.544 --rc geninfo_unexecuted_blocks=1 00:17:27.544 00:17:27.544 ' 00:17:27.544 00:33:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:27.544 00:33:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:27.545 00:33:12 
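The lt / cmp_versions trace above is a plain component-wise version comparison: the output of 'lcov --version' is parsed and 1.15 is judged older than 2, which selects the newer set of coverage flags. A minimal standalone sketch of the same idea (not the scripts/common.sh implementation):

  # succeed if dotted version $1 sorts strictly before $2
  version_lt() {
      local IFS=.-:                     # split on the same separators the trace uses
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                          # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "1.15 < 2: use the newer lcov options"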
nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:27.545 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.545 ************************************ 00:17:27.545 START TEST nvmf_identify 00:17:27.545 ************************************ 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:27.545 * Looking for test storage... 
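The "[: : integer expression expected" message above comes from build_nvmf_app_args applying an arithmetic test to an empty string ('[' '' -eq 1 ']'): the test errors and evaluates as false, and the script simply takes the false branch, so it appears harmless in this run. A quieter pattern, using a hypothetical flag name as a stand-in for whichever variable is unset here, would default the value first:

  flag=""                                    # hypothetical stand-in for the unset toggle
  [ "$flag" -eq 1 ] && echo enabled          # what the trace does: prints the error, acts as false
  [ "${flag:-0}" -eq 1 ] && echo enabled     # defaulting empty to 0 avoids the error message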
00:17:27.545 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:27.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.545 --rc genhtml_branch_coverage=1 00:17:27.545 --rc genhtml_function_coverage=1 00:17:27.545 --rc genhtml_legend=1 00:17:27.545 --rc geninfo_all_blocks=1 00:17:27.545 --rc geninfo_unexecuted_blocks=1 00:17:27.545 00:17:27.545 ' 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:27.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.545 --rc genhtml_branch_coverage=1 00:17:27.545 --rc genhtml_function_coverage=1 00:17:27.545 --rc genhtml_legend=1 00:17:27.545 --rc geninfo_all_blocks=1 00:17:27.545 --rc geninfo_unexecuted_blocks=1 00:17:27.545 00:17:27.545 ' 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:27.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.545 --rc genhtml_branch_coverage=1 00:17:27.545 --rc genhtml_function_coverage=1 00:17:27.545 --rc genhtml_legend=1 00:17:27.545 --rc geninfo_all_blocks=1 00:17:27.545 --rc geninfo_unexecuted_blocks=1 00:17:27.545 00:17:27.545 ' 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:27.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.545 --rc genhtml_branch_coverage=1 00:17:27.545 --rc genhtml_function_coverage=1 00:17:27.545 --rc genhtml_legend=1 00:17:27.545 --rc geninfo_all_blocks=1 00:17:27.545 --rc geninfo_unexecuted_blocks=1 00:17:27.545 00:17:27.545 ' 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:27.545 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.546 
00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:27.546 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.546 00:33:12 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@456 -- # nvmf_veth_init 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:27.546 Cannot find device "nvmf_init_br" 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:27.546 Cannot find device "nvmf_init_br2" 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:27.546 Cannot find device "nvmf_tgt_br" 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:17:27.546 Cannot find device "nvmf_tgt_br2" 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:27.546 Cannot find device "nvmf_init_br" 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:27.546 Cannot find device "nvmf_init_br2" 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:27.546 Cannot find device "nvmf_tgt_br" 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:27.546 Cannot find device "nvmf_tgt_br2" 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:27.546 Cannot find device "nvmf_br" 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:27.546 Cannot find device "nvmf_init_if" 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:27.546 Cannot find device "nvmf_init_if2" 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:27.546 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:27.546 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:27.546 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:27.547 
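The nvmf_veth_init entries here and in the next entries set up a dedicated network namespace for the target; a condensed sketch of the equivalent ip commands, using the interface names and addresses from the log (not a verbatim copy of common.sh):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address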
00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:27.547 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
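The bridge and firewall entries above amount to the following; the ipts helper is an iptables wrapper that tags each rule with an SPDK_NVMF comment (the expansion is visible in the log), presumably so later cleanup can find the rules again. The ping checks that follow verify the topology.

  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
           -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.3        # connectivity check from the host into the namespace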
00:17:27.547 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.112 ms 00:17:27.547 00:17:27.547 --- 10.0.0.3 ping statistics --- 00:17:27.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.547 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:27.547 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:27.547 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:17:27.547 00:17:27.547 --- 10.0.0.4 ping statistics --- 00:17:27.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.547 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:27.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:27.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:17:27.547 00:17:27.547 --- 10.0.0.1 ping statistics --- 00:17:27.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.547 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:27.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:27.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:17:27.547 00:17:27.547 --- 10.0.0.2 ping statistics --- 00:17:27.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.547 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@457 -- # return 0 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=87855 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 87855 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 87855 ']' 00:17:27.547 
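host/identify.sh@18-23 above start the target application inside the namespace and wait for its RPC socket; a hedged sketch of that step, where the loop is only a minimal stand-in for the autotest waitforlisten helper:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # assumption: simplified replacement for waitforlisten, which polls /var/tmp/spdk.sock
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done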
00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:27.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:27.547 00:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:27.547 [2024-12-17 00:33:12.942410] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:17:27.547 [2024-12-17 00:33:12.942520] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:27.547 [2024-12-17 00:33:13.085745] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:27.547 [2024-12-17 00:33:13.126546] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:27.547 [2024-12-17 00:33:13.126620] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:27.547 [2024-12-17 00:33:13.126634] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:27.547 [2024-12-17 00:33:13.126644] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:27.547 [2024-12-17 00:33:13.126652] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
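The app_setup_trace notices above point at the tracepoint snapshot mechanism; if needed, a snapshot can be captured with the command the target itself prints, or the shared-memory file can be kept for offline analysis:

  spdk_trace -s nvmf -i 0            # snapshot of the 0xFFFF tracepoint groups at runtime
  cp /dev/shm/nvmf_trace.0 .         # or copy the shm file for offline analysis/debug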
00:17:27.547 [2024-12-17 00:33:13.126827] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:27.547 [2024-12-17 00:33:13.127217] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:27.547 [2024-12-17 00:33:13.127842] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:27.547 [2024-12-17 00:33:13.127896] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.547 [2024-12-17 00:33:13.160388] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:28.118 00:33:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:28.118 00:33:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:17:28.118 00:33:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:28.118 00:33:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.118 00:33:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:28.118 [2024-12-17 00:33:13.907011] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:28.118 00:33:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.118 00:33:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:28.118 00:33:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:28.118 00:33:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:28.118 00:33:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:28.118 00:33:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.118 00:33:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:28.118 Malloc0 00:17:28.118 00:33:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.118 00:33:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:28.118 00:33:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.118 00:33:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:28.118 00:33:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.118 00:33:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:28.118 00:33:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.118 00:33:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:28.118 00:33:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.118 00:33:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:28.118 00:33:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.118 00:33:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:28.118 [2024-12-17 00:33:13.989321] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:28.118 00:33:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.118 00:33:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:28.118 00:33:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.118 00:33:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:28.118 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.118 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:28.118 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.118 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:28.118 [ 00:17:28.118 { 00:17:28.118 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:28.118 "subtype": "Discovery", 00:17:28.118 "listen_addresses": [ 00:17:28.118 { 00:17:28.118 "trtype": "TCP", 00:17:28.118 "adrfam": "IPv4", 00:17:28.118 "traddr": "10.0.0.3", 00:17:28.118 "trsvcid": "4420" 00:17:28.118 } 00:17:28.118 ], 00:17:28.118 "allow_any_host": true, 00:17:28.118 "hosts": [] 00:17:28.118 }, 00:17:28.118 { 00:17:28.118 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.118 "subtype": "NVMe", 00:17:28.118 "listen_addresses": [ 00:17:28.118 { 00:17:28.118 "trtype": "TCP", 00:17:28.118 "adrfam": "IPv4", 00:17:28.118 "traddr": "10.0.0.3", 00:17:28.118 "trsvcid": "4420" 00:17:28.118 } 00:17:28.118 ], 00:17:28.118 "allow_any_host": true, 00:17:28.118 "hosts": [], 00:17:28.118 "serial_number": "SPDK00000000000001", 00:17:28.118 "model_number": "SPDK bdev Controller", 00:17:28.118 "max_namespaces": 32, 00:17:28.118 "min_cntlid": 1, 00:17:28.118 "max_cntlid": 65519, 00:17:28.118 "namespaces": [ 00:17:28.118 { 00:17:28.118 "nsid": 1, 00:17:28.118 "bdev_name": "Malloc0", 00:17:28.118 "name": "Malloc0", 00:17:28.118 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:28.119 "eui64": "ABCDEF0123456789", 00:17:28.119 "uuid": "2617193c-2855-49a5-838e-fc3fb781b916" 00:17:28.119 } 00:17:28.119 ] 00:17:28.119 } 00:17:28.119 ] 00:17:28.119 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.119 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:28.119 [2024-12-17 00:33:14.044084] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
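Taken together, the rpc_cmd calls in host/identify.sh@24-37 above build the target configuration that nvmf_get_subsystems then reports as JSON; roughly equivalent to calling scripts/rpc.py directly (rpc_cmd appears to be a thin wrapper over it), with the same arguments as in the trace:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
         --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  rpc.py nvmf_get_subsystems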
00:17:28.119 [2024-12-17 00:33:14.044137] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87887 ] 00:17:28.381 [2024-12-17 00:33:14.183467] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:17:28.381 [2024-12-17 00:33:14.183523] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:28.381 [2024-12-17 00:33:14.183529] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:28.381 [2024-12-17 00:33:14.183539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:28.381 [2024-12-17 00:33:14.183548] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:28.381 [2024-12-17 00:33:14.183821] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:17:28.381 [2024-12-17 00:33:14.183886] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x145dac0 0 00:17:28.381 [2024-12-17 00:33:14.197393] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:28.381 [2024-12-17 00:33:14.197418] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:28.381 [2024-12-17 00:33:14.197441] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:28.381 [2024-12-17 00:33:14.197445] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:28.381 [2024-12-17 00:33:14.197475] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.381 [2024-12-17 00:33:14.197482] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.381 [2024-12-17 00:33:14.197487] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x145dac0) 00:17:28.381 [2024-12-17 00:33:14.197499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:28.381 [2024-12-17 00:33:14.197529] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14967c0, cid 0, qid 0 00:17:28.381 [2024-12-17 00:33:14.205385] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.381 [2024-12-17 00:33:14.205406] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.381 [2024-12-17 00:33:14.205428] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.381 [2024-12-17 00:33:14.205433] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14967c0) on tqpair=0x145dac0 00:17:28.381 [2024-12-17 00:33:14.205443] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:28.381 [2024-12-17 00:33:14.205451] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:17:28.381 [2024-12-17 00:33:14.205457] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:17:28.381 [2024-12-17 00:33:14.205473] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.381 [2024-12-17 00:33:14.205478] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.381 
[2024-12-17 00:33:14.205482] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x145dac0) 00:17:28.381 [2024-12-17 00:33:14.205491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.381 [2024-12-17 00:33:14.205517] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14967c0, cid 0, qid 0 00:17:28.381 [2024-12-17 00:33:14.205577] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.381 [2024-12-17 00:33:14.205584] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.381 [2024-12-17 00:33:14.205588] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.381 [2024-12-17 00:33:14.205592] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14967c0) on tqpair=0x145dac0 00:17:28.381 [2024-12-17 00:33:14.205598] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:17:28.381 [2024-12-17 00:33:14.205605] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:17:28.381 [2024-12-17 00:33:14.205613] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.381 [2024-12-17 00:33:14.205617] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.381 [2024-12-17 00:33:14.205621] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x145dac0) 00:17:28.381 [2024-12-17 00:33:14.205628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.381 [2024-12-17 00:33:14.205646] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14967c0, cid 0, qid 0 00:17:28.381 [2024-12-17 00:33:14.205706] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.381 [2024-12-17 00:33:14.205712] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.381 [2024-12-17 00:33:14.205716] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.381 [2024-12-17 00:33:14.205720] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14967c0) on tqpair=0x145dac0 00:17:28.381 [2024-12-17 00:33:14.205726] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:17:28.381 [2024-12-17 00:33:14.205735] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:17:28.381 [2024-12-17 00:33:14.205743] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.381 [2024-12-17 00:33:14.205747] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.381 [2024-12-17 00:33:14.205751] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x145dac0) 00:17:28.381 [2024-12-17 00:33:14.205758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.381 [2024-12-17 00:33:14.205775] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14967c0, cid 0, qid 0 00:17:28.381 [2024-12-17 00:33:14.205818] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.381 [2024-12-17 00:33:14.205825] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.381 [2024-12-17 00:33:14.205829] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.381 [2024-12-17 00:33:14.205833] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14967c0) on tqpair=0x145dac0 00:17:28.381 [2024-12-17 00:33:14.205839] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:28.381 [2024-12-17 00:33:14.205849] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.381 [2024-12-17 00:33:14.205853] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.381 [2024-12-17 00:33:14.205857] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x145dac0) 00:17:28.381 [2024-12-17 00:33:14.205864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.381 [2024-12-17 00:33:14.205880] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14967c0, cid 0, qid 0 00:17:28.381 [2024-12-17 00:33:14.205924] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.381 [2024-12-17 00:33:14.205930] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.381 [2024-12-17 00:33:14.205934] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.381 [2024-12-17 00:33:14.205938] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14967c0) on tqpair=0x145dac0 00:17:28.381 [2024-12-17 00:33:14.205943] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:17:28.381 [2024-12-17 00:33:14.205948] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:17:28.381 [2024-12-17 00:33:14.205956] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:28.381 [2024-12-17 00:33:14.206061] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:17:28.381 [2024-12-17 00:33:14.206067] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:28.381 [2024-12-17 00:33:14.206076] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.381 [2024-12-17 00:33:14.206080] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.381 [2024-12-17 00:33:14.206084] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x145dac0) 00:17:28.381 [2024-12-17 00:33:14.206091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.381 [2024-12-17 00:33:14.206108] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14967c0, cid 0, qid 0 00:17:28.381 [2024-12-17 00:33:14.206154] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.381 [2024-12-17 00:33:14.206161] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.381 [2024-12-17 00:33:14.206165] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.381 
[2024-12-17 00:33:14.206169] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14967c0) on tqpair=0x145dac0 00:17:28.381 [2024-12-17 00:33:14.206174] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:28.381 [2024-12-17 00:33:14.206184] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.381 [2024-12-17 00:33:14.206189] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.381 [2024-12-17 00:33:14.206192] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x145dac0) 00:17:28.381 [2024-12-17 00:33:14.206199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.381 [2024-12-17 00:33:14.206215] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14967c0, cid 0, qid 0 00:17:28.381 [2024-12-17 00:33:14.206260] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.382 [2024-12-17 00:33:14.206266] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.382 [2024-12-17 00:33:14.206270] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.382 [2024-12-17 00:33:14.206274] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14967c0) on tqpair=0x145dac0 00:17:28.382 [2024-12-17 00:33:14.206279] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:28.382 [2024-12-17 00:33:14.206285] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:17:28.382 [2024-12-17 00:33:14.206293] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:17:28.382 [2024-12-17 00:33:14.206308] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:17:28.382 [2024-12-17 00:33:14.206333] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.382 [2024-12-17 00:33:14.206352] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x145dac0) 00:17:28.382 [2024-12-17 00:33:14.206361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.382 [2024-12-17 00:33:14.206381] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14967c0, cid 0, qid 0 00:17:28.382 [2024-12-17 00:33:14.206464] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:28.382 [2024-12-17 00:33:14.206471] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:28.382 [2024-12-17 00:33:14.206475] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:28.382 [2024-12-17 00:33:14.206479] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x145dac0): datao=0, datal=4096, cccid=0 00:17:28.382 [2024-12-17 00:33:14.206485] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14967c0) on tqpair(0x145dac0): expected_datao=0, payload_size=4096 00:17:28.382 [2024-12-17 00:33:14.206490] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.382 
[2024-12-17 00:33:14.206498] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:28.382 [2024-12-17 00:33:14.206502] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:28.382 [2024-12-17 00:33:14.206511] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.382 [2024-12-17 00:33:14.206517] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.382 [2024-12-17 00:33:14.206521] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.382 [2024-12-17 00:33:14.206525] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14967c0) on tqpair=0x145dac0 00:17:28.382 [2024-12-17 00:33:14.206534] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:17:28.382 [2024-12-17 00:33:14.206539] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:17:28.382 [2024-12-17 00:33:14.206544] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:17:28.382 [2024-12-17 00:33:14.206549] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:17:28.382 [2024-12-17 00:33:14.206554] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:17:28.382 [2024-12-17 00:33:14.206560] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:17:28.382 [2024-12-17 00:33:14.206568] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:17:28.382 [2024-12-17 00:33:14.206581] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.382 [2024-12-17 00:33:14.206586] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.382 [2024-12-17 00:33:14.206590] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x145dac0) 00:17:28.382 [2024-12-17 00:33:14.206598] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:28.382 [2024-12-17 00:33:14.206617] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14967c0, cid 0, qid 0 00:17:28.382 [2024-12-17 00:33:14.206671] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.382 [2024-12-17 00:33:14.206678] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.382 [2024-12-17 00:33:14.206681] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.382 [2024-12-17 00:33:14.206685] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14967c0) on tqpair=0x145dac0 00:17:28.382 [2024-12-17 00:33:14.206693] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.382 [2024-12-17 00:33:14.206698] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.382 [2024-12-17 00:33:14.206701] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x145dac0) 00:17:28.382 [2024-12-17 00:33:14.206708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:28.382 [2024-12-17 00:33:14.206715] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.382 [2024-12-17 00:33:14.206719] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.382 [2024-12-17 00:33:14.206724] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x145dac0) 00:17:28.382 [2024-12-17 00:33:14.206730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:28.382 [2024-12-17 00:33:14.206737] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.382 [2024-12-17 00:33:14.206741] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.382 [2024-12-17 00:33:14.206745] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x145dac0) 00:17:28.382 [2024-12-17 00:33:14.206751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:28.382 [2024-12-17 00:33:14.206757] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.382 [2024-12-17 00:33:14.206761] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.382 [2024-12-17 00:33:14.206765] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x145dac0) 00:17:28.382 [2024-12-17 00:33:14.206771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:28.382 [2024-12-17 00:33:14.206776] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:17:28.382 [2024-12-17 00:33:14.206788] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:28.382 [2024-12-17 00:33:14.206796] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.382 [2024-12-17 00:33:14.206800] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x145dac0) 00:17:28.382 [2024-12-17 00:33:14.206808] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.382 [2024-12-17 00:33:14.206827] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14967c0, cid 0, qid 0 00:17:28.382 [2024-12-17 00:33:14.206834] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1496940, cid 1, qid 0 00:17:28.382 [2024-12-17 00:33:14.206839] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1496ac0, cid 2, qid 0 00:17:28.382 [2024-12-17 00:33:14.206844] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1496c40, cid 3, qid 0 00:17:28.382 [2024-12-17 00:33:14.206849] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1496dc0, cid 4, qid 0 00:17:28.382 [2024-12-17 00:33:14.206940] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.382 [2024-12-17 00:33:14.206946] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.382 [2024-12-17 00:33:14.206950] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.382 [2024-12-17 00:33:14.206954] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1496dc0) on tqpair=0x145dac0 00:17:28.382 [2024-12-17 00:33:14.206960] 
nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:17:28.382 [2024-12-17 00:33:14.206966] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:17:28.382 [2024-12-17 00:33:14.206977] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.382 [2024-12-17 00:33:14.206982] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x145dac0) 00:17:28.382 [2024-12-17 00:33:14.206989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.382 [2024-12-17 00:33:14.207021] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1496dc0, cid 4, qid 0 00:17:28.382 [2024-12-17 00:33:14.207075] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:28.382 [2024-12-17 00:33:14.207082] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:28.382 [2024-12-17 00:33:14.207086] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:28.382 [2024-12-17 00:33:14.207090] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x145dac0): datao=0, datal=4096, cccid=4 00:17:28.382 [2024-12-17 00:33:14.207095] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1496dc0) on tqpair(0x145dac0): expected_datao=0, payload_size=4096 00:17:28.382 [2024-12-17 00:33:14.207100] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.382 [2024-12-17 00:33:14.207107] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:28.382 [2024-12-17 00:33:14.207111] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:28.382 [2024-12-17 00:33:14.207119] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.382 [2024-12-17 00:33:14.207125] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.382 [2024-12-17 00:33:14.207129] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.382 [2024-12-17 00:33:14.207132] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1496dc0) on tqpair=0x145dac0 00:17:28.382 [2024-12-17 00:33:14.207145] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:17:28.382 [2024-12-17 00:33:14.207179] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.382 [2024-12-17 00:33:14.207186] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x145dac0) 00:17:28.382 [2024-12-17 00:33:14.207193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.382 [2024-12-17 00:33:14.207201] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.382 [2024-12-17 00:33:14.207205] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.382 [2024-12-17 00:33:14.207209] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x145dac0) 00:17:28.382 [2024-12-17 00:33:14.207215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:28.382 [2024-12-17 00:33:14.207239] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1496dc0, cid 4, qid 0 00:17:28.382 [2024-12-17 00:33:14.207246] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1496f40, cid 5, qid 0 00:17:28.382 [2024-12-17 00:33:14.207345] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:28.382 [2024-12-17 00:33:14.207354] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:28.382 [2024-12-17 00:33:14.207358] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:28.382 [2024-12-17 00:33:14.207361] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x145dac0): datao=0, datal=1024, cccid=4 00:17:28.383 [2024-12-17 00:33:14.207366] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1496dc0) on tqpair(0x145dac0): expected_datao=0, payload_size=1024 00:17:28.383 [2024-12-17 00:33:14.207370] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.383 [2024-12-17 00:33:14.207377] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:28.383 [2024-12-17 00:33:14.207381] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:28.383 [2024-12-17 00:33:14.207387] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.383 [2024-12-17 00:33:14.207393] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.383 [2024-12-17 00:33:14.207396] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.383 [2024-12-17 00:33:14.207400] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1496f40) on tqpair=0x145dac0 00:17:28.383 [2024-12-17 00:33:14.207418] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.383 [2024-12-17 00:33:14.207425] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.383 [2024-12-17 00:33:14.207429] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.383 [2024-12-17 00:33:14.207433] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1496dc0) on tqpair=0x145dac0 00:17:28.383 [2024-12-17 00:33:14.207446] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.383 [2024-12-17 00:33:14.207450] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x145dac0) 00:17:28.383 [2024-12-17 00:33:14.207457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.383 [2024-12-17 00:33:14.207482] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1496dc0, cid 4, qid 0 00:17:28.383 [2024-12-17 00:33:14.207548] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:28.383 [2024-12-17 00:33:14.207555] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:28.383 [2024-12-17 00:33:14.207559] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:28.383 [2024-12-17 00:33:14.207562] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x145dac0): datao=0, datal=3072, cccid=4 00:17:28.383 [2024-12-17 00:33:14.207567] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1496dc0) on tqpair(0x145dac0): expected_datao=0, payload_size=3072 00:17:28.383 [2024-12-17 00:33:14.207572] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.383 [2024-12-17 00:33:14.207578] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:28.383 [2024-12-17 00:33:14.207582] 
nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:28.383 [2024-12-17 00:33:14.207590] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.383 [2024-12-17 00:33:14.207613] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.383 [2024-12-17 00:33:14.207617] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.383 [2024-12-17 00:33:14.207621] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1496dc0) on tqpair=0x145dac0 00:17:28.383 [2024-12-17 00:33:14.207630] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.383 [2024-12-17 00:33:14.207635] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x145dac0) 00:17:28.383 [2024-12-17 00:33:14.207642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.383 [2024-12-17 00:33:14.207665] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1496dc0, cid 4, qid 0 00:17:28.383 ===================================================== 00:17:28.383 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:28.383 ===================================================== 00:17:28.383 Controller Capabilities/Features 00:17:28.383 ================================ 00:17:28.383 Vendor ID: 0000 00:17:28.383 Subsystem Vendor ID: 0000 00:17:28.383 Serial Number: .................... 00:17:28.383 Model Number: ........................................ 00:17:28.383 Firmware Version: 24.09.1 00:17:28.383 Recommended Arb Burst: 0 00:17:28.383 IEEE OUI Identifier: 00 00 00 00:17:28.383 Multi-path I/O 00:17:28.383 May have multiple subsystem ports: No 00:17:28.383 May have multiple controllers: No 00:17:28.383 Associated with SR-IOV VF: No 00:17:28.383 Max Data Transfer Size: 131072 00:17:28.383 Max Number of Namespaces: 0 00:17:28.383 Max Number of I/O Queues: 1024 00:17:28.383 NVMe Specification Version (VS): 1.3 00:17:28.383 NVMe Specification Version (Identify): 1.3 00:17:28.383 Maximum Queue Entries: 128 00:17:28.383 Contiguous Queues Required: Yes 00:17:28.383 Arbitration Mechanisms Supported 00:17:28.383 Weighted Round Robin: Not Supported 00:17:28.383 Vendor Specific: Not Supported 00:17:28.383 Reset Timeout: 15000 ms 00:17:28.383 Doorbell Stride: 4 bytes 00:17:28.383 NVM Subsystem Reset: Not Supported 00:17:28.383 Command Sets Supported 00:17:28.383 NVM Command Set: Supported 00:17:28.383 Boot Partition: Not Supported 00:17:28.383 Memory Page Size Minimum: 4096 bytes 00:17:28.383 Memory Page Size Maximum: 4096 bytes 00:17:28.383 Persistent Memory Region: Not Supported 00:17:28.383 Optional Asynchronous Events Supported 00:17:28.383 Namespace Attribute Notices: Not Supported 00:17:28.383 Firmware Activation Notices: Not Supported 00:17:28.383 ANA Change Notices: Not Supported 00:17:28.383 PLE Aggregate Log Change Notices: Not Supported 00:17:28.383 LBA Status Info Alert Notices: Not Supported 00:17:28.383 EGE Aggregate Log Change Notices: Not Supported 00:17:28.383 Normal NVM Subsystem Shutdown event: Not Supported 00:17:28.383 Zone Descriptor Change Notices: Not Supported 00:17:28.383 Discovery Log Change Notices: Supported 00:17:28.383 Controller Attributes 00:17:28.383 128-bit Host Identifier: Not Supported 00:17:28.383 Non-Operational Permissive Mode: Not Supported 00:17:28.383 NVM Sets: Not Supported 00:17:28.383 Read Recovery Levels: Not 
Supported 00:17:28.383 Endurance Groups: Not Supported 00:17:28.383 Predictable Latency Mode: Not Supported 00:17:28.383 Traffic Based Keep ALive: Not Supported 00:17:28.383 Namespace Granularity: Not Supported 00:17:28.383 SQ Associations: Not Supported 00:17:28.383 UUID List: Not Supported 00:17:28.383 Multi-Domain Subsystem: Not Supported 00:17:28.383 Fixed Capacity Management: Not Supported 00:17:28.383 Variable Capacity Management: Not Supported 00:17:28.383 Delete Endurance Group: Not Supported 00:17:28.383 Delete NVM Set: Not Supported 00:17:28.383 Extended LBA Formats Supported: Not Supported 00:17:28.383 Flexible Data Placement Supported: Not Supported 00:17:28.383 00:17:28.383 Controller Memory Buffer Support 00:17:28.383 ================================ 00:17:28.383 Supported: No 00:17:28.383 00:17:28.383 Persistent Memory Region Support 00:17:28.383 ================================ 00:17:28.383 Supported: No 00:17:28.383 00:17:28.383 Admin Command Set Attributes 00:17:28.383 ============================ 00:17:28.383 Security Send/Receive: Not Supported 00:17:28.383 Format NVM: Not Supported 00:17:28.383 Firmware Activate/Download: Not Supported 00:17:28.383 Namespace Management: Not Supported 00:17:28.383 Device Self-Test: Not Supported 00:17:28.383 Directives: Not Supported 00:17:28.383 NVMe-MI: Not Supported 00:17:28.383 Virtualization Management: Not Supported 00:17:28.383 Doorbell Buffer Config: Not Supported 00:17:28.383 Get LBA Status Capability: Not Supported 00:17:28.383 Command & Feature Lockdown Capability: Not Supported 00:17:28.383 Abort Command Limit: 1 00:17:28.383 Async Event Request Limit: 4 00:17:28.383 Number of Firmware Slots: N/A 00:17:28.383 Firmware Slot 1 Read-Only: N/A 00:17:28.383 Firmware Activation Without Reset: N/A 00:17:28.383 Multiple Update Detection Support: N/A 00:17:28.383 Firmware Update Granularity: No Information Provided 00:17:28.383 Per-Namespace SMART Log: No 00:17:28.383 Asymmetric Namespace Access Log Page: Not Supported 00:17:28.383 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:28.383 Command Effects Log Page: Not Supported 00:17:28.383 Get Log Page Extended Data: Supported 00:17:28.383 Telemetry Log Pages: Not Supported 00:17:28.383 Persistent Event Log Pages: Not Supported 00:17:28.383 Supported Log Pages Log Page: May Support 00:17:28.383 Commands Supported & Effects Log Page: Not Supported 00:17:28.383 Feature Identifiers & Effects Log Page:May Support 00:17:28.383 NVMe-MI Commands & Effects Log Page: May Support 00:17:28.383 Data Area 4 for Telemetry Log: Not Supported 00:17:28.383 Error Log Page Entries Supported: 128 00:17:28.383 Keep Alive: Not Supported 00:17:28.383 00:17:28.383 NVM Command Set Attributes 00:17:28.383 ========================== 00:17:28.383 Submission Queue Entry Size 00:17:28.383 Max: 1 00:17:28.383 Min: 1 00:17:28.383 Completion Queue Entry Size 00:17:28.383 Max: 1 00:17:28.383 Min: 1 00:17:28.383 Number of Namespaces: 0 00:17:28.383 Compare Command: Not Supported 00:17:28.383 Write Uncorrectable Command: Not Supported 00:17:28.383 Dataset Management Command: Not Supported 00:17:28.383 Write Zeroes Command: Not Supported 00:17:28.383 Set Features Save Field: Not Supported 00:17:28.383 Reservations: Not Supported 00:17:28.383 Timestamp: Not Supported 00:17:28.383 Copy: Not Supported 00:17:28.383 Volatile Write Cache: Not Present 00:17:28.383 Atomic Write Unit (Normal): 1 00:17:28.383 Atomic Write Unit (PFail): 1 00:17:28.383 Atomic Compare & Write Unit: 1 00:17:28.383 Fused Compare & Write: 
Supported 00:17:28.383 Scatter-Gather List 00:17:28.383 SGL Command Set: Supported 00:17:28.383 SGL Keyed: Supported 00:17:28.383 SGL Bit Bucket Descriptor: Not Supported 00:17:28.383 SGL Metadata Pointer: Not Supported 00:17:28.383 Oversized SGL: Not Supported 00:17:28.383 SGL Metadata Address: Not Supported 00:17:28.383 SGL Offset: Supported 00:17:28.383 Transport SGL Data Block: Not Supported 00:17:28.383 Replay Protected Memory Block: Not Supported 00:17:28.383 00:17:28.383 Firmware Slot Information 00:17:28.383 ========================= 00:17:28.383 Active slot: 0 00:17:28.383 00:17:28.383 00:17:28.383 Error Log 00:17:28.383 ========= 00:17:28.383 00:17:28.384 Active Namespaces 00:17:28.384 ================= 00:17:28.384 Discovery Log Page 00:17:28.384 ================== 00:17:28.384 Generation Counter: 2 00:17:28.384 Number of Records: 2 00:17:28.384 Record Format: 0 00:17:28.384 00:17:28.384 Discovery Log Entry 0 00:17:28.384 ---------------------- 00:17:28.384 Transport Type: 3 (TCP) 00:17:28.384 Address Family: 1 (IPv4) 00:17:28.384 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:28.384 Entry Flags: 00:17:28.384 Duplicate Returned Information: 1 00:17:28.384 Explicit Persistent Connection Support for Discovery: 1 00:17:28.384 Transport Requirements: 00:17:28.384 Secure Channel: Not Required 00:17:28.384 Port ID: 0 (0x0000) 00:17:28.384 Controller ID: 65535 (0xffff) 00:17:28.384 Admin Max SQ Size: 128 00:17:28.384 Transport Service Identifier: 4420 00:17:28.384 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:28.384 Transport Address: 10.0.0.3 00:17:28.384 Discovery Log Entry 1 00:17:28.384 ---------------------- 00:17:28.384 Transport Type: 3 (TCP) 00:17:28.384 Address Family: 1 (IPv4) 00:17:28.384 Subsystem Type: 2 (NVM Subsystem) 00:17:28.384 Entry Flags: 00:17:28.384 Duplicate Returned Information: 0 00:17:28.384 Explicit Persistent Connection Support for Discovery: 0 00:17:28.384 Transport Requirements: 00:17:28.384 Secure Channel: Not Required 00:17:28.384 Port ID: 0 (0x0000) 00:17:28.384 Controller ID: 65535 (0xffff) 00:17:28.384 Admin Max SQ Size: 128 00:17:28.384 Transport Service Identifier: 4420 00:17:28.384 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:17:28.384 Transport Address: 10.0.0.3 [2024-12-17 00:33:14.207743] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:28.384 [2024-12-17 00:33:14.207750] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:28.384 [2024-12-17 00:33:14.207753] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:28.384 [2024-12-17 00:33:14.207757] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x145dac0): datao=0, datal=8, cccid=4 00:17:28.384 [2024-12-17 00:33:14.207762] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1496dc0) on tqpair(0x145dac0): expected_datao=0, payload_size=8 00:17:28.384 [2024-12-17 00:33:14.207766] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.384 [2024-12-17 00:33:14.207773] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:28.384 [2024-12-17 00:33:14.207777] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:28.384 [2024-12-17 00:33:14.207791] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.384 [2024-12-17 00:33:14.207798] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.384 [2024-12-17 00:33:14.207802] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:17:28.384 [2024-12-17 00:33:14.207806] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1496dc0) on tqpair=0x145dac0 00:17:28.384 [2024-12-17 00:33:14.207913] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:17:28.384 [2024-12-17 00:33:14.207930] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14967c0) on tqpair=0x145dac0 00:17:28.384 [2024-12-17 00:33:14.207939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.384 [2024-12-17 00:33:14.207944] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1496940) on tqpair=0x145dac0 00:17:28.384 [2024-12-17 00:33:14.207949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.384 [2024-12-17 00:33:14.207955] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1496ac0) on tqpair=0x145dac0 00:17:28.384 [2024-12-17 00:33:14.207959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.384 [2024-12-17 00:33:14.207964] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1496c40) on tqpair=0x145dac0 00:17:28.384 [2024-12-17 00:33:14.207969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.384 [2024-12-17 00:33:14.207979] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.384 [2024-12-17 00:33:14.207983] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.384 [2024-12-17 00:33:14.207987] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x145dac0) 00:17:28.384 [2024-12-17 00:33:14.207995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.384 [2024-12-17 00:33:14.208020] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1496c40, cid 3, qid 0 00:17:28.384 [2024-12-17 00:33:14.208074] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.384 [2024-12-17 00:33:14.208082] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.384 [2024-12-17 00:33:14.208086] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.384 [2024-12-17 00:33:14.208090] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1496c40) on tqpair=0x145dac0 00:17:28.384 [2024-12-17 00:33:14.208097] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.384 [2024-12-17 00:33:14.208102] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.384 [2024-12-17 00:33:14.208105] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x145dac0) 00:17:28.384 [2024-12-17 00:33:14.208113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.384 [2024-12-17 00:33:14.208134] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1496c40, cid 3, qid 0 00:17:28.384 [2024-12-17 00:33:14.208194] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.384 [2024-12-17 00:33:14.208201] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.384 
[2024-12-17 00:33:14.208205] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.384 [2024-12-17 00:33:14.208209] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1496c40) on tqpair=0x145dac0 00:17:28.384 [2024-12-17 00:33:14.208214] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:17:28.384 [2024-12-17 00:33:14.208219] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:17:28.384 [2024-12-17 00:33:14.208228] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.384 [2024-12-17 00:33:14.208233] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.384 [2024-12-17 00:33:14.208237] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x145dac0) 00:17:28.384 [2024-12-17 00:33:14.208244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.384 [2024-12-17 00:33:14.208260] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1496c40, cid 3, qid 0 00:17:28.384 [2024-12-17 00:33:14.208303] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.384 [2024-12-17 00:33:14.208344] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.384 [2024-12-17 00:33:14.208349] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.384 [2024-12-17 00:33:14.208353] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1496c40) on tqpair=0x145dac0 00:17:28.384 [2024-12-17 00:33:14.208365] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.384 [2024-12-17 00:33:14.208370] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.384 [2024-12-17 00:33:14.208374] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x145dac0) 00:17:28.384 [2024-12-17 00:33:14.208382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.384 [2024-12-17 00:33:14.208402] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1496c40, cid 3, qid 0 00:17:28.384 [2024-12-17 00:33:14.208447] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.384 [2024-12-17 00:33:14.208454] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.384 [2024-12-17 00:33:14.208457] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.384 [2024-12-17 00:33:14.208462] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1496c40) on tqpair=0x145dac0 00:17:28.384 [2024-12-17 00:33:14.208498] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.384 [2024-12-17 00:33:14.208504] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.384 [2024-12-17 00:33:14.208508] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x145dac0) 00:17:28.384 [2024-12-17 00:33:14.208516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.384 [2024-12-17 00:33:14.208535] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1496c40, cid 3, qid 0 00:17:28.384 [2024-12-17 00:33:14.208579] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:17:28.384 [2024-12-17 00:33:14.208586] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.384 [2024-12-17 00:33:14.208590] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.384 [2024-12-17 00:33:14.208594] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1496c40) on tqpair=0x145dac0 00:17:28.384 [2024-12-17 00:33:14.208605] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.384 [2024-12-17 00:33:14.208610] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.384 [2024-12-17 00:33:14.208614] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x145dac0) 00:17:28.384 [2024-12-17 00:33:14.208621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.384 [2024-12-17 00:33:14.208639] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1496c40, cid 3, qid 0 00:17:28.384 [2024-12-17 00:33:14.208685] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.384 [2024-12-17 00:33:14.208692] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.384 [2024-12-17 00:33:14.208696] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.384 [2024-12-17 00:33:14.208700] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1496c40) on tqpair=0x145dac0 00:17:28.384 [2024-12-17 00:33:14.208710] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.384 [2024-12-17 00:33:14.208715] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.384 [2024-12-17 00:33:14.208719] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x145dac0) 00:17:28.384 [2024-12-17 00:33:14.208726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.384 [2024-12-17 00:33:14.208744] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1496c40, cid 3, qid 0 00:17:28.384 [2024-12-17 00:33:14.208804] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.384 [2024-12-17 00:33:14.208810] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.384 [2024-12-17 00:33:14.208814] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.385 [2024-12-17 00:33:14.208818] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1496c40) on tqpair=0x145dac0 00:17:28.385 [2024-12-17 00:33:14.208828] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.385 [2024-12-17 00:33:14.208833] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.385 [2024-12-17 00:33:14.208837] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x145dac0) 00:17:28.385 [2024-12-17 00:33:14.208844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.385 [2024-12-17 00:33:14.208874] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1496c40, cid 3, qid 0 00:17:28.385 [2024-12-17 00:33:14.208918] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.385 [2024-12-17 00:33:14.208925] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.385 [2024-12-17 00:33:14.208929] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:17:28.385 [2024-12-17 00:33:14.208933] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1496c40) on tqpair=0x145dac0 00:17:28.385 [2024-12-17 00:33:14.208943] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.385 [2024-12-17 00:33:14.208947] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.385 [2024-12-17 00:33:14.208951] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x145dac0) 00:17:28.385 [2024-12-17 00:33:14.208958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.385 [2024-12-17 00:33:14.208973] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1496c40, cid 3, qid 0 00:17:28.385 [2024-12-17 00:33:14.209018] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.385 [2024-12-17 00:33:14.209024] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.385 [2024-12-17 00:33:14.209028] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.385 [2024-12-17 00:33:14.209032] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1496c40) on tqpair=0x145dac0 00:17:28.385 [2024-12-17 00:33:14.209042] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.385 [2024-12-17 00:33:14.209046] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.385 [2024-12-17 00:33:14.209050] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x145dac0) 00:17:28.385 [2024-12-17 00:33:14.209057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.385 [2024-12-17 00:33:14.209073] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1496c40, cid 3, qid 0 00:17:28.385 [2024-12-17 00:33:14.209113] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.385 [2024-12-17 00:33:14.209120] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.385 [2024-12-17 00:33:14.209123] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.385 [2024-12-17 00:33:14.209127] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1496c40) on tqpair=0x145dac0 00:17:28.385 [2024-12-17 00:33:14.209137] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.385 [2024-12-17 00:33:14.209142] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.385 [2024-12-17 00:33:14.209145] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x145dac0) 00:17:28.385 [2024-12-17 00:33:14.209152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.385 [2024-12-17 00:33:14.209168] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1496c40, cid 3, qid 0 00:17:28.385 [2024-12-17 00:33:14.209212] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.385 [2024-12-17 00:33:14.209218] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.385 [2024-12-17 00:33:14.209222] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.385 [2024-12-17 00:33:14.209226] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1496c40) on tqpair=0x145dac0 00:17:28.385 [2024-12-17 00:33:14.209236] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.385 [2024-12-17 00:33:14.209240] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.385 [2024-12-17 00:33:14.209244] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x145dac0) 00:17:28.385 [2024-12-17 00:33:14.209251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.385 [2024-12-17 00:33:14.209267] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1496c40, cid 3, qid 0 00:17:28.385 [2024-12-17 00:33:14.209313] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.385 [2024-12-17 00:33:14.209335] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.385 [2024-12-17 00:33:14.209339] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.385 [2024-12-17 00:33:14.209343] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1496c40) on tqpair=0x145dac0 00:17:28.385 [2024-12-17 00:33:14.209353] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.385 [2024-12-17 00:33:14.209358] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.385 [2024-12-17 00:33:14.209362] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x145dac0) 00:17:28.385 [2024-12-17 00:33:14.213415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.385 [2024-12-17 00:33:14.213445] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1496c40, cid 3, qid 0 00:17:28.385 [2024-12-17 00:33:14.213503] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.385 [2024-12-17 00:33:14.213511] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.385 [2024-12-17 00:33:14.213514] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.385 [2024-12-17 00:33:14.213519] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1496c40) on tqpair=0x145dac0 00:17:28.385 [2024-12-17 00:33:14.213527] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:17:28.385 00:17:28.385 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:28.385 [2024-12-17 00:33:14.254444] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:17:28.385 [2024-12-17 00:33:14.254497] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87893 ] 00:17:28.649 [2024-12-17 00:33:14.392628] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:17:28.649 [2024-12-17 00:33:14.392686] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:28.649 [2024-12-17 00:33:14.392693] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:28.649 [2024-12-17 00:33:14.392705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:28.649 [2024-12-17 00:33:14.392713] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:28.649 [2024-12-17 00:33:14.392944] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:17:28.649 [2024-12-17 00:33:14.393022] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xf0fac0 0 00:17:28.649 [2024-12-17 00:33:14.405425] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:28.649 [2024-12-17 00:33:14.405450] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:28.649 [2024-12-17 00:33:14.405456] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:28.649 [2024-12-17 00:33:14.405460] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:28.649 [2024-12-17 00:33:14.405488] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.649 [2024-12-17 00:33:14.405495] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.649 [2024-12-17 00:33:14.405499] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf0fac0) 00:17:28.650 [2024-12-17 00:33:14.405511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:28.650 [2024-12-17 00:33:14.405541] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf487c0, cid 0, qid 0 00:17:28.650 [2024-12-17 00:33:14.413398] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.650 [2024-12-17 00:33:14.413419] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.650 [2024-12-17 00:33:14.413424] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.650 [2024-12-17 00:33:14.413429] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf487c0) on tqpair=0xf0fac0 00:17:28.650 [2024-12-17 00:33:14.413438] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:28.650 [2024-12-17 00:33:14.413445] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:17:28.650 [2024-12-17 00:33:14.413451] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:17:28.650 [2024-12-17 00:33:14.413464] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.650 [2024-12-17 00:33:14.413469] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.650 [2024-12-17 00:33:14.413473] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf0fac0) 00:17:28.650 [2024-12-17 00:33:14.413482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.650 [2024-12-17 00:33:14.413508] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf487c0, cid 0, qid 0 00:17:28.650 [2024-12-17 00:33:14.413554] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.650 [2024-12-17 00:33:14.413560] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.650 [2024-12-17 00:33:14.413564] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.650 [2024-12-17 00:33:14.413568] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf487c0) on tqpair=0xf0fac0 00:17:28.650 [2024-12-17 00:33:14.413573] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:17:28.650 [2024-12-17 00:33:14.413596] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:17:28.650 [2024-12-17 00:33:14.413620] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.650 [2024-12-17 00:33:14.413625] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.650 [2024-12-17 00:33:14.413628] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf0fac0) 00:17:28.650 [2024-12-17 00:33:14.413636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.650 [2024-12-17 00:33:14.413671] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf487c0, cid 0, qid 0 00:17:28.650 [2024-12-17 00:33:14.413714] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.650 [2024-12-17 00:33:14.413721] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.650 [2024-12-17 00:33:14.413725] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.650 [2024-12-17 00:33:14.413729] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf487c0) on tqpair=0xf0fac0 00:17:28.650 [2024-12-17 00:33:14.413735] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:17:28.650 [2024-12-17 00:33:14.413744] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:17:28.650 [2024-12-17 00:33:14.413751] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.650 [2024-12-17 00:33:14.413755] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.650 [2024-12-17 00:33:14.413759] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf0fac0) 00:17:28.650 [2024-12-17 00:33:14.413783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.650 [2024-12-17 00:33:14.413802] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf487c0, cid 0, qid 0 00:17:28.650 [2024-12-17 00:33:14.413849] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.650 [2024-12-17 00:33:14.413862] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.650 [2024-12-17 00:33:14.413866] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.650 [2024-12-17 00:33:14.413871] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf487c0) on tqpair=0xf0fac0 00:17:28.650 [2024-12-17 00:33:14.413877] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:28.650 [2024-12-17 00:33:14.413889] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.650 [2024-12-17 00:33:14.413894] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.650 [2024-12-17 00:33:14.413898] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf0fac0) 00:17:28.650 [2024-12-17 00:33:14.413906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.650 [2024-12-17 00:33:14.413924] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf487c0, cid 0, qid 0 00:17:28.650 [2024-12-17 00:33:14.413971] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.650 [2024-12-17 00:33:14.413982] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.650 [2024-12-17 00:33:14.413987] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.650 [2024-12-17 00:33:14.414006] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf487c0) on tqpair=0xf0fac0 00:17:28.650 [2024-12-17 00:33:14.414011] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:17:28.650 [2024-12-17 00:33:14.414016] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:17:28.650 [2024-12-17 00:33:14.414025] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:28.650 [2024-12-17 00:33:14.414131] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:17:28.650 [2024-12-17 00:33:14.414152] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:28.650 [2024-12-17 00:33:14.414162] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.650 [2024-12-17 00:33:14.414167] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.650 [2024-12-17 00:33:14.414171] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf0fac0) 00:17:28.650 [2024-12-17 00:33:14.414178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.650 [2024-12-17 00:33:14.414198] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf487c0, cid 0, qid 0 00:17:28.650 [2024-12-17 00:33:14.414258] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.650 [2024-12-17 00:33:14.414264] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.650 [2024-12-17 00:33:14.414268] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.650 [2024-12-17 00:33:14.414272] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf487c0) on tqpair=0xf0fac0 00:17:28.650 [2024-12-17 00:33:14.414277] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:28.650 [2024-12-17 00:33:14.414287] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.650 [2024-12-17 00:33:14.414292] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.650 [2024-12-17 00:33:14.414296] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf0fac0) 00:17:28.650 [2024-12-17 00:33:14.414303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.650 [2024-12-17 00:33:14.414320] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf487c0, cid 0, qid 0 00:17:28.650 [2024-12-17 00:33:14.414367] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.650 [2024-12-17 00:33:14.414374] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.650 [2024-12-17 00:33:14.414378] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.650 [2024-12-17 00:33:14.414382] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf487c0) on tqpair=0xf0fac0 00:17:28.650 [2024-12-17 00:33:14.414387] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:28.650 [2024-12-17 00:33:14.414392] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:17:28.650 [2024-12-17 00:33:14.414400] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:17:28.650 [2024-12-17 00:33:14.414414] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:17:28.650 [2024-12-17 00:33:14.414424] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.650 [2024-12-17 00:33:14.414428] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf0fac0) 00:17:28.650 [2024-12-17 00:33:14.414436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.650 [2024-12-17 00:33:14.414456] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf487c0, cid 0, qid 0 00:17:28.650 [2024-12-17 00:33:14.414539] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:28.650 [2024-12-17 00:33:14.414546] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:28.650 [2024-12-17 00:33:14.414550] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:28.650 [2024-12-17 00:33:14.414554] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf0fac0): datao=0, datal=4096, cccid=0 00:17:28.650 [2024-12-17 00:33:14.414559] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf487c0) on tqpair(0xf0fac0): expected_datao=0, payload_size=4096 00:17:28.650 [2024-12-17 00:33:14.414563] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.650 [2024-12-17 00:33:14.414571] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:28.650 [2024-12-17 00:33:14.414575] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:28.650 [2024-12-17 
00:33:14.414584] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.650 [2024-12-17 00:33:14.414590] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.650 [2024-12-17 00:33:14.414593] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.650 [2024-12-17 00:33:14.414597] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf487c0) on tqpair=0xf0fac0 00:17:28.650 [2024-12-17 00:33:14.414606] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:17:28.650 [2024-12-17 00:33:14.414611] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:17:28.650 [2024-12-17 00:33:14.414615] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:17:28.650 [2024-12-17 00:33:14.414620] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:17:28.650 [2024-12-17 00:33:14.414624] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:17:28.650 [2024-12-17 00:33:14.414629] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:17:28.650 [2024-12-17 00:33:14.414638] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:17:28.651 [2024-12-17 00:33:14.414649] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.651 [2024-12-17 00:33:14.414654] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.651 [2024-12-17 00:33:14.414658] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf0fac0) 00:17:28.651 [2024-12-17 00:33:14.414666] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:28.651 [2024-12-17 00:33:14.414686] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf487c0, cid 0, qid 0 00:17:28.651 [2024-12-17 00:33:14.414736] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.651 [2024-12-17 00:33:14.414743] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.651 [2024-12-17 00:33:14.414746] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.651 [2024-12-17 00:33:14.414750] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf487c0) on tqpair=0xf0fac0 00:17:28.651 [2024-12-17 00:33:14.414758] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.651 [2024-12-17 00:33:14.414762] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.651 [2024-12-17 00:33:14.414766] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf0fac0) 00:17:28.651 [2024-12-17 00:33:14.414773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:28.651 [2024-12-17 00:33:14.414779] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.651 [2024-12-17 00:33:14.414783] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.651 [2024-12-17 00:33:14.414787] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xf0fac0) 00:17:28.651 
[2024-12-17 00:33:14.414793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:28.651 [2024-12-17 00:33:14.414799] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.651 [2024-12-17 00:33:14.414803] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.651 [2024-12-17 00:33:14.414807] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xf0fac0) 00:17:28.651 [2024-12-17 00:33:14.414828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:28.651 [2024-12-17 00:33:14.414835] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.651 [2024-12-17 00:33:14.414839] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.651 [2024-12-17 00:33:14.414843] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0fac0) 00:17:28.651 [2024-12-17 00:33:14.414849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:28.651 [2024-12-17 00:33:14.414854] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:28.651 [2024-12-17 00:33:14.414867] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:28.651 [2024-12-17 00:33:14.414874] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.651 [2024-12-17 00:33:14.414878] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf0fac0) 00:17:28.651 [2024-12-17 00:33:14.414886] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.651 [2024-12-17 00:33:14.414906] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf487c0, cid 0, qid 0 00:17:28.651 [2024-12-17 00:33:14.414913] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48940, cid 1, qid 0 00:17:28.651 [2024-12-17 00:33:14.414918] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48ac0, cid 2, qid 0 00:17:28.651 [2024-12-17 00:33:14.414923] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48c40, cid 3, qid 0 00:17:28.651 [2024-12-17 00:33:14.414928] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48dc0, cid 4, qid 0 00:17:28.651 [2024-12-17 00:33:14.415016] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.651 [2024-12-17 00:33:14.415023] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.651 [2024-12-17 00:33:14.415026] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.651 [2024-12-17 00:33:14.415030] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48dc0) on tqpair=0xf0fac0 00:17:28.651 [2024-12-17 00:33:14.415036] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:17:28.651 [2024-12-17 00:33:14.415042] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:28.651 [2024-12-17 00:33:14.415053] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:17:28.651 [2024-12-17 00:33:14.415061] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:28.651 [2024-12-17 00:33:14.415068] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.651 [2024-12-17 00:33:14.415072] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.651 [2024-12-17 00:33:14.415076] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf0fac0) 00:17:28.651 [2024-12-17 00:33:14.415084] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:28.651 [2024-12-17 00:33:14.415102] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48dc0, cid 4, qid 0 00:17:28.651 [2024-12-17 00:33:14.415145] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.651 [2024-12-17 00:33:14.415152] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.651 [2024-12-17 00:33:14.415156] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.651 [2024-12-17 00:33:14.415160] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48dc0) on tqpair=0xf0fac0 00:17:28.651 [2024-12-17 00:33:14.415238] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:17:28.651 [2024-12-17 00:33:14.415249] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:28.651 [2024-12-17 00:33:14.415257] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.651 [2024-12-17 00:33:14.415261] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf0fac0) 00:17:28.651 [2024-12-17 00:33:14.415269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.651 [2024-12-17 00:33:14.415287] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48dc0, cid 4, qid 0 00:17:28.651 [2024-12-17 00:33:14.415372] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:28.651 [2024-12-17 00:33:14.415380] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:28.651 [2024-12-17 00:33:14.415384] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:28.651 [2024-12-17 00:33:14.415388] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf0fac0): datao=0, datal=4096, cccid=4 00:17:28.651 [2024-12-17 00:33:14.415393] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf48dc0) on tqpair(0xf0fac0): expected_datao=0, payload_size=4096 00:17:28.651 [2024-12-17 00:33:14.415397] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.651 [2024-12-17 00:33:14.415405] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:28.651 [2024-12-17 00:33:14.415409] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:28.651 [2024-12-17 00:33:14.415418] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.651 [2024-12-17 00:33:14.415424] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:17:28.651 [2024-12-17 00:33:14.415428] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.651 [2024-12-17 00:33:14.415432] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48dc0) on tqpair=0xf0fac0 00:17:28.651 [2024-12-17 00:33:14.415448] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:17:28.651 [2024-12-17 00:33:14.415458] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:17:28.651 [2024-12-17 00:33:14.415469] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:17:28.651 [2024-12-17 00:33:14.415477] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.651 [2024-12-17 00:33:14.415481] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf0fac0) 00:17:28.651 [2024-12-17 00:33:14.415489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.651 [2024-12-17 00:33:14.415509] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48dc0, cid 4, qid 0 00:17:28.651 [2024-12-17 00:33:14.415581] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:28.651 [2024-12-17 00:33:14.415587] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:28.651 [2024-12-17 00:33:14.415591] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:28.651 [2024-12-17 00:33:14.415595] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf0fac0): datao=0, datal=4096, cccid=4 00:17:28.651 [2024-12-17 00:33:14.415600] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf48dc0) on tqpair(0xf0fac0): expected_datao=0, payload_size=4096 00:17:28.651 [2024-12-17 00:33:14.415605] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.651 [2024-12-17 00:33:14.415612] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:28.651 [2024-12-17 00:33:14.415616] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:28.651 [2024-12-17 00:33:14.415624] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.651 [2024-12-17 00:33:14.415631] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.651 [2024-12-17 00:33:14.415635] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.651 [2024-12-17 00:33:14.415639] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48dc0) on tqpair=0xf0fac0 00:17:28.651 [2024-12-17 00:33:14.415649] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:28.651 [2024-12-17 00:33:14.415660] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:28.651 [2024-12-17 00:33:14.415668] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.651 [2024-12-17 00:33:14.415672] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf0fac0) 00:17:28.651 [2024-12-17 00:33:14.415680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.651 [2024-12-17 00:33:14.415699] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48dc0, cid 4, qid 0 00:17:28.651 [2024-12-17 00:33:14.415767] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:28.651 [2024-12-17 00:33:14.415773] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:28.651 [2024-12-17 00:33:14.415783] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:28.651 [2024-12-17 00:33:14.415787] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf0fac0): datao=0, datal=4096, cccid=4 00:17:28.651 [2024-12-17 00:33:14.415791] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf48dc0) on tqpair(0xf0fac0): expected_datao=0, payload_size=4096 00:17:28.651 [2024-12-17 00:33:14.415796] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.651 [2024-12-17 00:33:14.415803] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:28.652 [2024-12-17 00:33:14.415807] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:28.652 [2024-12-17 00:33:14.415815] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.652 [2024-12-17 00:33:14.415821] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.652 [2024-12-17 00:33:14.415825] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.652 [2024-12-17 00:33:14.415829] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48dc0) on tqpair=0xf0fac0 00:17:28.652 [2024-12-17 00:33:14.415841] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:28.652 [2024-12-17 00:33:14.415850] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:17:28.652 [2024-12-17 00:33:14.415861] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:17:28.652 [2024-12-17 00:33:14.415868] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:28.652 [2024-12-17 00:33:14.415873] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:28.652 [2024-12-17 00:33:14.415878] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:17:28.652 [2024-12-17 00:33:14.415883] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:17:28.652 [2024-12-17 00:33:14.415888] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:17:28.652 [2024-12-17 00:33:14.415893] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:17:28.652 [2024-12-17 00:33:14.415907] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.652 [2024-12-17 00:33:14.415912] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf0fac0) 00:17:28.652 [2024-12-17 00:33:14.415919] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.652 [2024-12-17 00:33:14.415926] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.652 [2024-12-17 00:33:14.415930] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.652 [2024-12-17 00:33:14.415934] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf0fac0) 00:17:28.652 [2024-12-17 00:33:14.415940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:28.652 [2024-12-17 00:33:14.415963] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48dc0, cid 4, qid 0 00:17:28.652 [2024-12-17 00:33:14.415971] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48f40, cid 5, qid 0 00:17:28.652 [2024-12-17 00:33:14.416027] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.652 [2024-12-17 00:33:14.416034] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.652 [2024-12-17 00:33:14.416037] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.652 [2024-12-17 00:33:14.416041] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48dc0) on tqpair=0xf0fac0 00:17:28.652 [2024-12-17 00:33:14.416048] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.652 [2024-12-17 00:33:14.416054] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.652 [2024-12-17 00:33:14.416058] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.652 [2024-12-17 00:33:14.416062] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48f40) on tqpair=0xf0fac0 00:17:28.652 [2024-12-17 00:33:14.416072] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.652 [2024-12-17 00:33:14.416076] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf0fac0) 00:17:28.652 [2024-12-17 00:33:14.416083] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.652 [2024-12-17 00:33:14.416100] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48f40, cid 5, qid 0 00:17:28.652 [2024-12-17 00:33:14.416149] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.652 [2024-12-17 00:33:14.416155] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.652 [2024-12-17 00:33:14.416159] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.652 [2024-12-17 00:33:14.416163] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48f40) on tqpair=0xf0fac0 00:17:28.652 [2024-12-17 00:33:14.416173] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.652 [2024-12-17 00:33:14.416178] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf0fac0) 00:17:28.652 [2024-12-17 00:33:14.416185] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.652 [2024-12-17 00:33:14.416202] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48f40, cid 5, qid 0 00:17:28.652 [2024-12-17 00:33:14.416258] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.652 [2024-12-17 00:33:14.416264] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:17:28.652 [2024-12-17 00:33:14.416268] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.652 [2024-12-17 00:33:14.416272] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48f40) on tqpair=0xf0fac0 00:17:28.652 [2024-12-17 00:33:14.416283] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.652 [2024-12-17 00:33:14.416287] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf0fac0) 00:17:28.652 [2024-12-17 00:33:14.416294] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.652 [2024-12-17 00:33:14.416310] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48f40, cid 5, qid 0 00:17:28.652 [2024-12-17 00:33:14.416374] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.652 [2024-12-17 00:33:14.416383] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.652 [2024-12-17 00:33:14.416386] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.652 [2024-12-17 00:33:14.416391] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48f40) on tqpair=0xf0fac0 00:17:28.652 [2024-12-17 00:33:14.416407] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.652 [2024-12-17 00:33:14.416413] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf0fac0) 00:17:28.652 [2024-12-17 00:33:14.416420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.652 [2024-12-17 00:33:14.416428] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.652 [2024-12-17 00:33:14.416431] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf0fac0) 00:17:28.652 [2024-12-17 00:33:14.416438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.652 [2024-12-17 00:33:14.416445] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.652 [2024-12-17 00:33:14.416449] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xf0fac0) 00:17:28.652 [2024-12-17 00:33:14.416455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.652 [2024-12-17 00:33:14.416462] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.652 [2024-12-17 00:33:14.416476] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xf0fac0) 00:17:28.652 [2024-12-17 00:33:14.416499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.652 [2024-12-17 00:33:14.416523] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48f40, cid 5, qid 0 00:17:28.652 [2024-12-17 00:33:14.416531] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48dc0, cid 4, qid 0 00:17:28.652 [2024-12-17 00:33:14.416536] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf490c0, cid 6, qid 0 00:17:28.652 [2024-12-17 
00:33:14.416541] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf49240, cid 7, qid 0 00:17:28.652 [2024-12-17 00:33:14.416679] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:28.652 [2024-12-17 00:33:14.416686] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:28.652 [2024-12-17 00:33:14.416690] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:28.652 [2024-12-17 00:33:14.416694] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf0fac0): datao=0, datal=8192, cccid=5 00:17:28.652 [2024-12-17 00:33:14.416700] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf48f40) on tqpair(0xf0fac0): expected_datao=0, payload_size=8192 00:17:28.652 [2024-12-17 00:33:14.416704] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.652 [2024-12-17 00:33:14.416721] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:28.652 [2024-12-17 00:33:14.416727] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:28.652 [2024-12-17 00:33:14.416733] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:28.652 [2024-12-17 00:33:14.416739] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:28.652 [2024-12-17 00:33:14.416743] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:28.652 [2024-12-17 00:33:14.416748] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf0fac0): datao=0, datal=512, cccid=4 00:17:28.652 [2024-12-17 00:33:14.416753] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf48dc0) on tqpair(0xf0fac0): expected_datao=0, payload_size=512 00:17:28.652 [2024-12-17 00:33:14.416757] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.652 [2024-12-17 00:33:14.416764] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:28.652 [2024-12-17 00:33:14.416768] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:28.652 [2024-12-17 00:33:14.416774] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:28.652 [2024-12-17 00:33:14.416780] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:28.652 [2024-12-17 00:33:14.416784] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:28.652 [2024-12-17 00:33:14.416788] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf0fac0): datao=0, datal=512, cccid=6 00:17:28.652 [2024-12-17 00:33:14.416793] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf490c0) on tqpair(0xf0fac0): expected_datao=0, payload_size=512 00:17:28.652 [2024-12-17 00:33:14.416798] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.652 [2024-12-17 00:33:14.416804] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:28.652 [2024-12-17 00:33:14.416808] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:28.652 [2024-12-17 00:33:14.416829] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:28.652 [2024-12-17 00:33:14.416835] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:28.652 [2024-12-17 00:33:14.416853] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:28.652 [2024-12-17 00:33:14.416856] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf0fac0): datao=0, datal=4096, cccid=7 00:17:28.652 [2024-12-17 00:33:14.416861] 
nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf49240) on tqpair(0xf0fac0): expected_datao=0, payload_size=4096 00:17:28.652 [2024-12-17 00:33:14.416865] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.652 [2024-12-17 00:33:14.416871] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:28.652 [2024-12-17 00:33:14.416875] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:28.652 [2024-12-17 00:33:14.416883] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.653 [2024-12-17 00:33:14.416889] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.653 [2024-12-17 00:33:14.416893] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.653 [2024-12-17 00:33:14.416897] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48f40) on tqpair=0xf0fac0 00:17:28.653 [2024-12-17 00:33:14.416911] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.653 ===================================================== 00:17:28.653 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:28.653 ===================================================== 00:17:28.653 Controller Capabilities/Features 00:17:28.653 ================================ 00:17:28.653 Vendor ID: 8086 00:17:28.653 Subsystem Vendor ID: 8086 00:17:28.653 Serial Number: SPDK00000000000001 00:17:28.653 Model Number: SPDK bdev Controller 00:17:28.653 Firmware Version: 24.09.1 00:17:28.653 Recommended Arb Burst: 6 00:17:28.653 IEEE OUI Identifier: e4 d2 5c 00:17:28.653 Multi-path I/O 00:17:28.653 May have multiple subsystem ports: Yes 00:17:28.653 May have multiple controllers: Yes 00:17:28.653 Associated with SR-IOV VF: No 00:17:28.653 Max Data Transfer Size: 131072 00:17:28.653 Max Number of Namespaces: 32 00:17:28.653 Max Number of I/O Queues: 127 00:17:28.653 NVMe Specification Version (VS): 1.3 00:17:28.653 NVMe Specification Version (Identify): 1.3 00:17:28.653 Maximum Queue Entries: 128 00:17:28.653 Contiguous Queues Required: Yes 00:17:28.653 Arbitration Mechanisms Supported 00:17:28.653 Weighted Round Robin: Not Supported 00:17:28.653 Vendor Specific: Not Supported 00:17:28.653 Reset Timeout: 15000 ms 00:17:28.653 Doorbell Stride: 4 bytes 00:17:28.653 NVM Subsystem Reset: Not Supported 00:17:28.653 Command Sets Supported 00:17:28.653 NVM Command Set: Supported 00:17:28.653 Boot Partition: Not Supported 00:17:28.653 Memory Page Size Minimum: 4096 bytes 00:17:28.653 Memory Page Size Maximum: 4096 bytes 00:17:28.653 Persistent Memory Region: Not Supported 00:17:28.653 Optional Asynchronous Events Supported 00:17:28.653 Namespace Attribute Notices: Supported 00:17:28.653 Firmware Activation Notices: Not Supported 00:17:28.653 ANA Change Notices: Not Supported 00:17:28.653 PLE Aggregate Log Change Notices: Not Supported 00:17:28.653 LBA Status Info Alert Notices: Not Supported 00:17:28.653 EGE Aggregate Log Change Notices: Not Supported 00:17:28.653 Normal NVM Subsystem Shutdown event: Not Supported 00:17:28.653 Zone Descriptor Change Notices: Not Supported 00:17:28.653 Discovery Log Change Notices: Not Supported 00:17:28.653 Controller Attributes 00:17:28.653 128-bit Host Identifier: Supported 00:17:28.653 Non-Operational Permissive Mode: Not Supported 00:17:28.653 NVM Sets: Not Supported 00:17:28.653 Read Recovery Levels: Not Supported 00:17:28.653 Endurance Groups: Not Supported 00:17:28.653 Predictable Latency Mode: Not Supported 00:17:28.653 Traffic Based Keep 
ALive: Not Supported 00:17:28.653 Namespace Granularity: Not Supported 00:17:28.653 SQ Associations: Not Supported 00:17:28.653 UUID List: Not Supported 00:17:28.653 Multi-Domain Subsystem: Not Supported 00:17:28.653 Fixed Capacity Management: Not Supported 00:17:28.653 Variable Capacity Management: Not Supported 00:17:28.653 Delete Endurance Group: Not Supported 00:17:28.653 Delete NVM Set: Not Supported 00:17:28.653 Extended LBA Formats Supported: Not Supported 00:17:28.653 Flexible Data Placement Supported: Not Supported 00:17:28.653 00:17:28.653 Controller Memory Buffer Support 00:17:28.653 ================================ 00:17:28.653 Supported: No 00:17:28.653 00:17:28.653 Persistent Memory Region Support 00:17:28.653 ================================ 00:17:28.653 Supported: No 00:17:28.653 00:17:28.653 Admin Command Set Attributes 00:17:28.653 ============================ 00:17:28.653 Security Send/Receive: Not Supported 00:17:28.653 Format NVM: Not Supported 00:17:28.653 Firmware Activate/Download: Not Supported 00:17:28.653 Namespace Management: Not Supported 00:17:28.653 Device Self-Test: Not Supported 00:17:28.653 Directives: Not Supported 00:17:28.653 NVMe-MI: Not Supported 00:17:28.653 Virtualization Management: Not Supported 00:17:28.653 Doorbell Buffer Config: Not Supported 00:17:28.653 Get LBA Status Capability: Not Supported 00:17:28.653 Command & Feature Lockdown Capability: Not Supported 00:17:28.653 Abort Command Limit: 4 00:17:28.653 Async Event Request Limit: 4 00:17:28.653 Number of Firmware Slots: N/A 00:17:28.653 Firmware Slot 1 Read-Only: N/A 00:17:28.653 Firmware Activation Without Reset: [2024-12-17 00:33:14.416918] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.653 [2024-12-17 00:33:14.416921] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.653 [2024-12-17 00:33:14.416925] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48dc0) on tqpair=0xf0fac0 00:17:28.653 [2024-12-17 00:33:14.416937] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.653 [2024-12-17 00:33:14.416943] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.653 [2024-12-17 00:33:14.416946] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.653 [2024-12-17 00:33:14.416950] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf490c0) on tqpair=0xf0fac0 00:17:28.653 [2024-12-17 00:33:14.416957] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.653 [2024-12-17 00:33:14.416963] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.653 [2024-12-17 00:33:14.416967] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.653 [2024-12-17 00:33:14.416971] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf49240) on tqpair=0xf0fac0 00:17:28.653 N/A 00:17:28.653 Multiple Update Detection Support: N/A 00:17:28.653 Firmware Update Granularity: No Information Provided 00:17:28.653 Per-Namespace SMART Log: No 00:17:28.653 Asymmetric Namespace Access Log Page: Not Supported 00:17:28.653 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:17:28.653 Command Effects Log Page: Supported 00:17:28.653 Get Log Page Extended Data: Supported 00:17:28.653 Telemetry Log Pages: Not Supported 00:17:28.653 Persistent Event Log Pages: Not Supported 00:17:28.653 Supported Log Pages Log Page: May Support 00:17:28.653 Commands Supported & Effects Log Page: Not Supported 00:17:28.653 Feature Identifiers & 
Effects Log Page:May Support 00:17:28.653 NVMe-MI Commands & Effects Log Page: May Support 00:17:28.653 Data Area 4 for Telemetry Log: Not Supported 00:17:28.653 Error Log Page Entries Supported: 128 00:17:28.653 Keep Alive: Supported 00:17:28.653 Keep Alive Granularity: 10000 ms 00:17:28.653 00:17:28.653 NVM Command Set Attributes 00:17:28.653 ========================== 00:17:28.653 Submission Queue Entry Size 00:17:28.653 Max: 64 00:17:28.653 Min: 64 00:17:28.653 Completion Queue Entry Size 00:17:28.653 Max: 16 00:17:28.653 Min: 16 00:17:28.653 Number of Namespaces: 32 00:17:28.653 Compare Command: Supported 00:17:28.653 Write Uncorrectable Command: Not Supported 00:17:28.653 Dataset Management Command: Supported 00:17:28.653 Write Zeroes Command: Supported 00:17:28.653 Set Features Save Field: Not Supported 00:17:28.653 Reservations: Supported 00:17:28.653 Timestamp: Not Supported 00:17:28.653 Copy: Supported 00:17:28.653 Volatile Write Cache: Present 00:17:28.653 Atomic Write Unit (Normal): 1 00:17:28.653 Atomic Write Unit (PFail): 1 00:17:28.653 Atomic Compare & Write Unit: 1 00:17:28.653 Fused Compare & Write: Supported 00:17:28.653 Scatter-Gather List 00:17:28.653 SGL Command Set: Supported 00:17:28.653 SGL Keyed: Supported 00:17:28.653 SGL Bit Bucket Descriptor: Not Supported 00:17:28.653 SGL Metadata Pointer: Not Supported 00:17:28.653 Oversized SGL: Not Supported 00:17:28.653 SGL Metadata Address: Not Supported 00:17:28.653 SGL Offset: Supported 00:17:28.653 Transport SGL Data Block: Not Supported 00:17:28.653 Replay Protected Memory Block: Not Supported 00:17:28.653 00:17:28.653 Firmware Slot Information 00:17:28.653 ========================= 00:17:28.653 Active slot: 1 00:17:28.653 Slot 1 Firmware Revision: 24.09.1 00:17:28.653 00:17:28.653 00:17:28.653 Commands Supported and Effects 00:17:28.653 ============================== 00:17:28.653 Admin Commands 00:17:28.653 -------------- 00:17:28.653 Get Log Page (02h): Supported 00:17:28.653 Identify (06h): Supported 00:17:28.653 Abort (08h): Supported 00:17:28.653 Set Features (09h): Supported 00:17:28.653 Get Features (0Ah): Supported 00:17:28.653 Asynchronous Event Request (0Ch): Supported 00:17:28.653 Keep Alive (18h): Supported 00:17:28.653 I/O Commands 00:17:28.653 ------------ 00:17:28.653 Flush (00h): Supported LBA-Change 00:17:28.653 Write (01h): Supported LBA-Change 00:17:28.653 Read (02h): Supported 00:17:28.653 Compare (05h): Supported 00:17:28.653 Write Zeroes (08h): Supported LBA-Change 00:17:28.653 Dataset Management (09h): Supported LBA-Change 00:17:28.653 Copy (19h): Supported LBA-Change 00:17:28.653 00:17:28.653 Error Log 00:17:28.653 ========= 00:17:28.653 00:17:28.653 Arbitration 00:17:28.653 =========== 00:17:28.653 Arbitration Burst: 1 00:17:28.653 00:17:28.653 Power Management 00:17:28.653 ================ 00:17:28.653 Number of Power States: 1 00:17:28.653 Current Power State: Power State #0 00:17:28.653 Power State #0: 00:17:28.653 Max Power: 0.00 W 00:17:28.653 Non-Operational State: Operational 00:17:28.653 Entry Latency: Not Reported 00:17:28.654 Exit Latency: Not Reported 00:17:28.654 Relative Read Throughput: 0 00:17:28.654 Relative Read Latency: 0 00:17:28.654 Relative Write Throughput: 0 00:17:28.654 Relative Write Latency: 0 00:17:28.654 Idle Power: Not Reported 00:17:28.654 Active Power: Not Reported 00:17:28.654 Non-Operational Permissive Mode: Not Supported 00:17:28.654 00:17:28.654 Health Information 00:17:28.654 ================== 00:17:28.654 Critical Warnings: 00:17:28.654 Available Spare 
Space: OK 00:17:28.654 Temperature: OK 00:17:28.654 Device Reliability: OK 00:17:28.654 Read Only: No 00:17:28.654 Volatile Memory Backup: OK 00:17:28.654 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:28.654 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:28.654 Available Spare: 0% 00:17:28.654 Available Spare Threshold: 0% 00:17:28.654 Life Percentage U[2024-12-17 00:33:14.417065] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.654 [2024-12-17 00:33:14.417072] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xf0fac0) 00:17:28.654 [2024-12-17 00:33:14.417079] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.654 [2024-12-17 00:33:14.417101] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf49240, cid 7, qid 0 00:17:28.654 [2024-12-17 00:33:14.417152] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.654 [2024-12-17 00:33:14.417159] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.654 [2024-12-17 00:33:14.417162] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.654 [2024-12-17 00:33:14.417166] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf49240) on tqpair=0xf0fac0 00:17:28.654 [2024-12-17 00:33:14.417201] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:17:28.654 [2024-12-17 00:33:14.417212] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf487c0) on tqpair=0xf0fac0 00:17:28.654 [2024-12-17 00:33:14.417219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.654 [2024-12-17 00:33:14.417225] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48940) on tqpair=0xf0fac0 00:17:28.654 [2024-12-17 00:33:14.417229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.654 [2024-12-17 00:33:14.417235] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48ac0) on tqpair=0xf0fac0 00:17:28.654 [2024-12-17 00:33:14.417239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.654 [2024-12-17 00:33:14.417245] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48c40) on tqpair=0xf0fac0 00:17:28.654 [2024-12-17 00:33:14.417249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.654 [2024-12-17 00:33:14.417258] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.654 [2024-12-17 00:33:14.417262] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.654 [2024-12-17 00:33:14.417266] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0fac0) 00:17:28.654 [2024-12-17 00:33:14.417274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.654 [2024-12-17 00:33:14.417295] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48c40, cid 3, qid 0 00:17:28.654 [2024-12-17 00:33:14.417342] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.654 [2024-12-17 00:33:14.417348] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.654 [2024-12-17 00:33:14.421429] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.654 [2024-12-17 00:33:14.421437] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48c40) on tqpair=0xf0fac0 00:17:28.654 [2024-12-17 00:33:14.421446] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.654 [2024-12-17 00:33:14.421451] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.654 [2024-12-17 00:33:14.421454] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0fac0) 00:17:28.654 [2024-12-17 00:33:14.421462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.654 [2024-12-17 00:33:14.421491] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48c40, cid 3, qid 0 00:17:28.654 [2024-12-17 00:33:14.421548] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.654 [2024-12-17 00:33:14.421554] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.654 [2024-12-17 00:33:14.421558] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.654 [2024-12-17 00:33:14.421562] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48c40) on tqpair=0xf0fac0 00:17:28.654 [2024-12-17 00:33:14.421566] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:17:28.654 [2024-12-17 00:33:14.421571] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:17:28.654 [2024-12-17 00:33:14.421580] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.654 [2024-12-17 00:33:14.421585] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.654 [2024-12-17 00:33:14.421588] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0fac0) 00:17:28.654 [2024-12-17 00:33:14.421595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.654 [2024-12-17 00:33:14.421645] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48c40, cid 3, qid 0 00:17:28.654 [2024-12-17 00:33:14.421688] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.654 [2024-12-17 00:33:14.421695] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.654 [2024-12-17 00:33:14.421699] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.654 [2024-12-17 00:33:14.421703] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48c40) on tqpair=0xf0fac0 00:17:28.654 [2024-12-17 00:33:14.421713] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.654 [2024-12-17 00:33:14.421718] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.654 [2024-12-17 00:33:14.421722] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0fac0) 00:17:28.654 [2024-12-17 00:33:14.421729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.654 [2024-12-17 00:33:14.421746] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48c40, cid 3, qid 0 00:17:28.654 [2024-12-17 00:33:14.421803] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.654 [2024-12-17 00:33:14.421810] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.654 [2024-12-17 00:33:14.421814] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.654 [2024-12-17 00:33:14.421819] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48c40) on tqpair=0xf0fac0 00:17:28.654 [2024-12-17 00:33:14.421829] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.654 [2024-12-17 00:33:14.421834] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.654 [2024-12-17 00:33:14.421838] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0fac0) 00:17:28.654 [2024-12-17 00:33:14.421845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.654 [2024-12-17 00:33:14.421862] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48c40, cid 3, qid 0 00:17:28.654 [2024-12-17 00:33:14.421906] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.654 [2024-12-17 00:33:14.421913] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.654 [2024-12-17 00:33:14.421916] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.654 [2024-12-17 00:33:14.421921] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48c40) on tqpair=0xf0fac0 00:17:28.654 [2024-12-17 00:33:14.421931] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.654 [2024-12-17 00:33:14.421936] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.654 [2024-12-17 00:33:14.421940] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0fac0) 00:17:28.654 [2024-12-17 00:33:14.421947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.654 [2024-12-17 00:33:14.421964] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48c40, cid 3, qid 0 00:17:28.654 [2024-12-17 00:33:14.422006] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.654 [2024-12-17 00:33:14.422012] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.654 [2024-12-17 00:33:14.422016] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.654 [2024-12-17 00:33:14.422020] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48c40) on tqpair=0xf0fac0 00:17:28.654 [2024-12-17 00:33:14.422031] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.654 [2024-12-17 00:33:14.422035] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.654 [2024-12-17 00:33:14.422039] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0fac0) 00:17:28.654 [2024-12-17 00:33:14.422046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.654 [2024-12-17 00:33:14.422063] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48c40, cid 3, qid 0 00:17:28.654 [2024-12-17 00:33:14.422124] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.654 [2024-12-17 00:33:14.422136] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.654 [2024-12-17 00:33:14.422140] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.655 [2024-12-17 00:33:14.422144] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48c40) on tqpair=0xf0fac0 00:17:28.655 [2024-12-17 00:33:14.422155] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.655 [2024-12-17 00:33:14.422160] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.655 [2024-12-17 00:33:14.422163] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0fac0) 00:17:28.655 [2024-12-17 00:33:14.422171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.655 [2024-12-17 00:33:14.422188] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48c40, cid 3, qid 0 00:17:28.655 [2024-12-17 00:33:14.422228] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.655 [2024-12-17 00:33:14.422235] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.655 [2024-12-17 00:33:14.422239] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.655 [2024-12-17 00:33:14.422243] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48c40) on tqpair=0xf0fac0 00:17:28.655 [2024-12-17 00:33:14.422253] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.655 [2024-12-17 00:33:14.422258] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.655 [2024-12-17 00:33:14.422262] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0fac0) 00:17:28.655 [2024-12-17 00:33:14.422269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.655 [2024-12-17 00:33:14.422285] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48c40, cid 3, qid 0 00:17:28.655 [2024-12-17 00:33:14.422328] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.655 [2024-12-17 00:33:14.422336] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.655 [2024-12-17 00:33:14.422340] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.655 [2024-12-17 00:33:14.422345] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48c40) on tqpair=0xf0fac0 00:17:28.655 [2024-12-17 00:33:14.422355] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.655 [2024-12-17 00:33:14.422360] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.655 [2024-12-17 00:33:14.422364] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0fac0) 00:17:28.655 [2024-12-17 00:33:14.422371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.655 [2024-12-17 00:33:14.422390] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48c40, cid 3, qid 0 00:17:28.655 [2024-12-17 00:33:14.422434] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.655 [2024-12-17 00:33:14.422440] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.655 [2024-12-17 00:33:14.422444] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.655 [2024-12-17 00:33:14.422448] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48c40) on tqpair=0xf0fac0 00:17:28.655 
[2024-12-17 00:33:14.422458] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.655 [2024-12-17 00:33:14.422463] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.655 [2024-12-17 00:33:14.422467] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0fac0) 00:17:28.655 [2024-12-17 00:33:14.422474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.655 [2024-12-17 00:33:14.422491] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48c40, cid 3, qid 0 00:17:28.655 [2024-12-17 00:33:14.422531] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.655 [2024-12-17 00:33:14.422538] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.655 [2024-12-17 00:33:14.422541] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.655 [2024-12-17 00:33:14.422545] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48c40) on tqpair=0xf0fac0 00:17:28.655 [2024-12-17 00:33:14.422555] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.655 [2024-12-17 00:33:14.422560] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.655 [2024-12-17 00:33:14.422564] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0fac0) 00:17:28.655 [2024-12-17 00:33:14.422571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.655 [2024-12-17 00:33:14.422587] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48c40, cid 3, qid 0 00:17:28.655 [2024-12-17 00:33:14.422631] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.655 [2024-12-17 00:33:14.422637] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.655 [2024-12-17 00:33:14.422641] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.655 [2024-12-17 00:33:14.422645] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48c40) on tqpair=0xf0fac0 00:17:28.655 [2024-12-17 00:33:14.422655] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.655 [2024-12-17 00:33:14.422659] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.655 [2024-12-17 00:33:14.422663] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0fac0) 00:17:28.655 [2024-12-17 00:33:14.422670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.655 [2024-12-17 00:33:14.422687] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48c40, cid 3, qid 0 00:17:28.655 [2024-12-17 00:33:14.422732] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.655 [2024-12-17 00:33:14.422738] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.655 [2024-12-17 00:33:14.422742] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.655 [2024-12-17 00:33:14.422746] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48c40) on tqpair=0xf0fac0 00:17:28.655 [2024-12-17 00:33:14.422756] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.655 [2024-12-17 00:33:14.422760] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.655 [2024-12-17 
00:33:14.422764] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0fac0) 00:17:28.655 [2024-12-17 00:33:14.422771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.655 [2024-12-17 00:33:14.422788] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48c40, cid 3, qid 0 00:17:28.655 [2024-12-17 00:33:14.422830] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.655 [2024-12-17 00:33:14.422837] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.655 [2024-12-17 00:33:14.422840] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.655 [2024-12-17 00:33:14.422844] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48c40) on tqpair=0xf0fac0 00:17:28.655 [2024-12-17 00:33:14.422854] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.655 [2024-12-17 00:33:14.422859] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.655 [2024-12-17 00:33:14.422863] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0fac0) 00:17:28.655 [2024-12-17 00:33:14.422870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.655 [2024-12-17 00:33:14.422887] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48c40, cid 3, qid 0 00:17:28.655 [2024-12-17 00:33:14.422929] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.655 [2024-12-17 00:33:14.422936] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.655 [2024-12-17 00:33:14.422939] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.655 [2024-12-17 00:33:14.422943] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48c40) on tqpair=0xf0fac0 00:17:28.655 [2024-12-17 00:33:14.422954] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.655 [2024-12-17 00:33:14.422958] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.655 [2024-12-17 00:33:14.422962] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0fac0) 00:17:28.655 [2024-12-17 00:33:14.422969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.655 [2024-12-17 00:33:14.422986] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48c40, cid 3, qid 0 00:17:28.655 [2024-12-17 00:33:14.423025] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.655 [2024-12-17 00:33:14.423036] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.655 [2024-12-17 00:33:14.423040] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.655 [2024-12-17 00:33:14.423045] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48c40) on tqpair=0xf0fac0 00:17:28.655 [2024-12-17 00:33:14.423055] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.655 [2024-12-17 00:33:14.423060] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.655 [2024-12-17 00:33:14.423064] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0fac0) 00:17:28.655 [2024-12-17 00:33:14.423071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.655 [2024-12-17 00:33:14.423088] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48c40, cid 3, qid 0 00:17:28.655 [2024-12-17 00:33:14.423129] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.655 [2024-12-17 00:33:14.423135] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.655 [2024-12-17 00:33:14.423139] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.655 [2024-12-17 00:33:14.423143] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48c40) on tqpair=0xf0fac0 00:17:28.655 [2024-12-17 00:33:14.423153] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.655 [2024-12-17 00:33:14.423157] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.655 [2024-12-17 00:33:14.423161] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0fac0) 00:17:28.655 [2024-12-17 00:33:14.423168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.655 [2024-12-17 00:33:14.423185] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48c40, cid 3, qid 0 00:17:28.655 [2024-12-17 00:33:14.423227] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.655 [2024-12-17 00:33:14.423234] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.655 [2024-12-17 00:33:14.423238] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.655 [2024-12-17 00:33:14.423242] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48c40) on tqpair=0xf0fac0 00:17:28.655 [2024-12-17 00:33:14.423252] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.655 [2024-12-17 00:33:14.423256] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.655 [2024-12-17 00:33:14.423260] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0fac0) 00:17:28.655 [2024-12-17 00:33:14.423267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.655 [2024-12-17 00:33:14.423283] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48c40, cid 3, qid 0 00:17:28.655 [2024-12-17 00:33:14.423352] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.655 [2024-12-17 00:33:14.423360] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.655 [2024-12-17 00:33:14.423364] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.655 [2024-12-17 00:33:14.423368] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48c40) on tqpair=0xf0fac0 00:17:28.656 [2024-12-17 00:33:14.423379] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.656 [2024-12-17 00:33:14.423384] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.656 [2024-12-17 00:33:14.423388] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0fac0) 00:17:28.656 [2024-12-17 00:33:14.423395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.656 [2024-12-17 00:33:14.423414] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48c40, cid 3, qid 0 00:17:28.656 [2024-12-17 
00:33:14.423462] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.656 [2024-12-17 00:33:14.423473] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.656 [2024-12-17 00:33:14.423478] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.656 [2024-12-17 00:33:14.423482] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48c40) on tqpair=0xf0fac0 00:17:28.656 [2024-12-17 00:33:14.423493] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.656 [2024-12-17 00:33:14.423498] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.656 [2024-12-17 00:33:14.423502] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0fac0) 00:17:28.656 [2024-12-17 00:33:14.423509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.656 [2024-12-17 00:33:14.423527] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48c40, cid 3, qid 0 00:17:28.656 [2024-12-17 00:33:14.423576] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.656 [2024-12-17 00:33:14.423587] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.656 [2024-12-17 00:33:14.423591] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.656 [2024-12-17 00:33:14.423595] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48c40) on tqpair=0xf0fac0 00:17:28.656 [2024-12-17 00:33:14.423606] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.656 [2024-12-17 00:33:14.423611] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.656 [2024-12-17 00:33:14.423615] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0fac0) 00:17:28.656 [2024-12-17 00:33:14.423623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.656 [2024-12-17 00:33:14.423640] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48c40, cid 3, qid 0 00:17:28.656 [2024-12-17 00:33:14.423684] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.656 [2024-12-17 00:33:14.423691] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.656 [2024-12-17 00:33:14.423695] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.656 [2024-12-17 00:33:14.423699] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48c40) on tqpair=0xf0fac0 00:17:28.656 [2024-12-17 00:33:14.423724] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.656 [2024-12-17 00:33:14.423729] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.656 [2024-12-17 00:33:14.423733] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0fac0) 00:17:28.656 [2024-12-17 00:33:14.423740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.656 [2024-12-17 00:33:14.423756] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48c40, cid 3, qid 0 00:17:28.656 [2024-12-17 00:33:14.423819] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.656 [2024-12-17 00:33:14.423835] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.656 [2024-12-17 
00:33:14.423840] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.656 [2024-12-17 00:33:14.423844] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48c40) on tqpair=0xf0fac0 00:17:28.656 [2024-12-17 00:33:14.423856] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.656 [2024-12-17 00:33:14.423861] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.656 [2024-12-17 00:33:14.423864] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0fac0) 00:17:28.656 [2024-12-17 00:33:14.423872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.656 [2024-12-17 00:33:14.423890] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48c40, cid 3, qid 0 00:17:28.656 [2024-12-17 00:33:14.423938] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.656 [2024-12-17 00:33:14.423949] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.656 [2024-12-17 00:33:14.423953] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.656 [2024-12-17 00:33:14.423958] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48c40) on tqpair=0xf0fac0 00:17:28.656 [2024-12-17 00:33:14.423969] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.656 [2024-12-17 00:33:14.423974] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.656 [2024-12-17 00:33:14.423977] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0fac0) 00:17:28.656 [2024-12-17 00:33:14.423985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.656 [2024-12-17 00:33:14.424002] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48c40, cid 3, qid 0 00:17:28.656 [2024-12-17 00:33:14.424050] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.656 [2024-12-17 00:33:14.424057] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.656 [2024-12-17 00:33:14.424060] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.656 [2024-12-17 00:33:14.424065] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48c40) on tqpair=0xf0fac0 00:17:28.656 [2024-12-17 00:33:14.424075] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.656 [2024-12-17 00:33:14.424080] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.656 [2024-12-17 00:33:14.424084] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0fac0) 00:17:28.656 [2024-12-17 00:33:14.424091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.656 [2024-12-17 00:33:14.424122] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48c40, cid 3, qid 0 00:17:28.656 [2024-12-17 00:33:14.424165] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.656 [2024-12-17 00:33:14.424172] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.656 [2024-12-17 00:33:14.424175] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.656 [2024-12-17 00:33:14.424179] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48c40) on tqpair=0xf0fac0 
00:17:28.656 [2024-12-17 00:33:14.424189] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.656 [2024-12-17 00:33:14.424194] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.656 [2024-12-17 00:33:14.424198] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0fac0) 00:17:28.656 [2024-12-17 00:33:14.424205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.656 [2024-12-17 00:33:14.424221] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48c40, cid 3, qid 0 00:17:28.656 [2024-12-17 00:33:14.424267] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.656 [2024-12-17 00:33:14.424274] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.656 [2024-12-17 00:33:14.424277] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.656 [2024-12-17 00:33:14.424281] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48c40) on tqpair=0xf0fac0 00:17:28.656 [2024-12-17 00:33:14.424291] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.656 [2024-12-17 00:33:14.424296] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.656 [2024-12-17 00:33:14.424300] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0fac0) 00:17:28.656 [2024-12-17 00:33:14.424307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.656 [2024-12-17 00:33:14.424335] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48c40, cid 3, qid 0 00:17:28.656 [2024-12-17 00:33:14.424382] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.656 [2024-12-17 00:33:14.424389] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.656 [2024-12-17 00:33:14.424392] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.656 [2024-12-17 00:33:14.424396] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48c40) on tqpair=0xf0fac0 00:17:28.656 [2024-12-17 00:33:14.424407] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.656 [2024-12-17 00:33:14.424411] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.656 [2024-12-17 00:33:14.424415] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0fac0) 00:17:28.656 [2024-12-17 00:33:14.424423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.656 [2024-12-17 00:33:14.424440] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48c40, cid 3, qid 0 00:17:28.656 [2024-12-17 00:33:14.424512] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.656 [2024-12-17 00:33:14.424521] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.656 [2024-12-17 00:33:14.424525] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.656 [2024-12-17 00:33:14.424529] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48c40) on tqpair=0xf0fac0 00:17:28.656 [2024-12-17 00:33:14.424540] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.656 [2024-12-17 00:33:14.424545] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:17:28.656 [2024-12-17 00:33:14.424549] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0fac0) 00:17:28.656 [2024-12-17 00:33:14.424557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.656 [2024-12-17 00:33:14.424576] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48c40, cid 3, qid 0 00:17:28.656 [2024-12-17 00:33:14.424619] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.656 [2024-12-17 00:33:14.424626] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.656 [2024-12-17 00:33:14.424630] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.656 [2024-12-17 00:33:14.424634] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48c40) on tqpair=0xf0fac0 00:17:28.656 [2024-12-17 00:33:14.424645] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.656 [2024-12-17 00:33:14.424650] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.656 [2024-12-17 00:33:14.424654] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0fac0) 00:17:28.656 [2024-12-17 00:33:14.424662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.656 [2024-12-17 00:33:14.424679] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48c40, cid 3, qid 0 00:17:28.656 [2024-12-17 00:33:14.424728] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.656 [2024-12-17 00:33:14.424735] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.656 [2024-12-17 00:33:14.424739] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.656 [2024-12-17 00:33:14.424743] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48c40) on tqpair=0xf0fac0 00:17:28.657 [2024-12-17 00:33:14.424754] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.657 [2024-12-17 00:33:14.424759] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.657 [2024-12-17 00:33:14.424763] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0fac0) 00:17:28.657 [2024-12-17 00:33:14.424770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.657 [2024-12-17 00:33:14.424788] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48c40, cid 3, qid 0 00:17:28.657 [2024-12-17 00:33:14.424865] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.657 [2024-12-17 00:33:14.424871] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.657 [2024-12-17 00:33:14.424875] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.657 [2024-12-17 00:33:14.424879] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48c40) on tqpair=0xf0fac0 00:17:28.657 [2024-12-17 00:33:14.424889] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.657 [2024-12-17 00:33:14.424894] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.657 [2024-12-17 00:33:14.424897] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0fac0) 00:17:28.657 [2024-12-17 00:33:14.424905] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.657 [2024-12-17 00:33:14.424921] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48c40, cid 3, qid 0 00:17:28.657 [2024-12-17 00:33:14.424969] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.657 [2024-12-17 00:33:14.424975] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.657 [2024-12-17 00:33:14.424979] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.657 [2024-12-17 00:33:14.424983] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48c40) on tqpair=0xf0fac0 00:17:28.657 [2024-12-17 00:33:14.424993] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.657 [2024-12-17 00:33:14.424998] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.657 [2024-12-17 00:33:14.425002] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0fac0) 00:17:28.657 [2024-12-17 00:33:14.425009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.657 [2024-12-17 00:33:14.425025] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48c40, cid 3, qid 0 00:17:28.657 [2024-12-17 00:33:14.425071] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.657 [2024-12-17 00:33:14.425078] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.657 [2024-12-17 00:33:14.425081] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.657 [2024-12-17 00:33:14.425085] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48c40) on tqpair=0xf0fac0 00:17:28.657 [2024-12-17 00:33:14.425095] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.657 [2024-12-17 00:33:14.425100] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.657 [2024-12-17 00:33:14.425103] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0fac0) 00:17:28.657 [2024-12-17 00:33:14.425111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.657 [2024-12-17 00:33:14.425127] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48c40, cid 3, qid 0 00:17:28.657 [2024-12-17 00:33:14.425170] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.657 [2024-12-17 00:33:14.425177] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.657 [2024-12-17 00:33:14.425180] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.657 [2024-12-17 00:33:14.425184] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48c40) on tqpair=0xf0fac0 00:17:28.657 [2024-12-17 00:33:14.425194] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.657 [2024-12-17 00:33:14.425199] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.657 [2024-12-17 00:33:14.425203] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0fac0) 00:17:28.657 [2024-12-17 00:33:14.425210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.657 [2024-12-17 00:33:14.425226] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0xf48c40, cid 3, qid 0 00:17:28.657 [2024-12-17 00:33:14.425270] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.657 [2024-12-17 00:33:14.425276] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.657 [2024-12-17 00:33:14.425280] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.657 [2024-12-17 00:33:14.425284] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48c40) on tqpair=0xf0fac0 00:17:28.657 [2024-12-17 00:33:14.425294] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.657 [2024-12-17 00:33:14.425299] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.657 [2024-12-17 00:33:14.425302] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0fac0) 00:17:28.657 [2024-12-17 00:33:14.425310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.657 [2024-12-17 00:33:14.425326] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48c40, cid 3, qid 0 00:17:28.657 [2024-12-17 00:33:14.429368] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.657 [2024-12-17 00:33:14.429388] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.657 [2024-12-17 00:33:14.429393] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.657 [2024-12-17 00:33:14.429398] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48c40) on tqpair=0xf0fac0 00:17:28.657 [2024-12-17 00:33:14.429412] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.657 [2024-12-17 00:33:14.429417] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.657 [2024-12-17 00:33:14.429421] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0fac0) 00:17:28.657 [2024-12-17 00:33:14.429429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.657 [2024-12-17 00:33:14.429453] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf48c40, cid 3, qid 0 00:17:28.657 [2024-12-17 00:33:14.429502] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.657 [2024-12-17 00:33:14.429509] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.657 [2024-12-17 00:33:14.429513] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.657 [2024-12-17 00:33:14.429517] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf48c40) on tqpair=0xf0fac0 00:17:28.657 [2024-12-17 00:33:14.429525] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:17:28.657 sed: 0% 00:17:28.657 Data Units Read: 0 00:17:28.657 Data Units Written: 0 00:17:28.657 Host Read Commands: 0 00:17:28.657 Host Write Commands: 0 00:17:28.657 Controller Busy Time: 0 minutes 00:17:28.657 Power Cycles: 0 00:17:28.657 Power On Hours: 0 hours 00:17:28.657 Unsafe Shutdowns: 0 00:17:28.657 Unrecoverable Media Errors: 0 00:17:28.657 Lifetime Error Log Entries: 0 00:17:28.657 Warning Temperature Time: 0 minutes 00:17:28.657 Critical Temperature Time: 0 minutes 00:17:28.657 00:17:28.657 Number of Queues 00:17:28.657 ================ 00:17:28.657 Number of I/O Submission Queues: 127 00:17:28.657 Number of I/O Completion Queues: 127 00:17:28.657 
00:17:28.657 Active Namespaces 00:17:28.657 ================= 00:17:28.657 Namespace ID:1 00:17:28.657 Error Recovery Timeout: Unlimited 00:17:28.657 Command Set Identifier: NVM (00h) 00:17:28.657 Deallocate: Supported 00:17:28.657 Deallocated/Unwritten Error: Not Supported 00:17:28.657 Deallocated Read Value: Unknown 00:17:28.657 Deallocate in Write Zeroes: Not Supported 00:17:28.657 Deallocated Guard Field: 0xFFFF 00:17:28.657 Flush: Supported 00:17:28.657 Reservation: Supported 00:17:28.657 Namespace Sharing Capabilities: Multiple Controllers 00:17:28.657 Size (in LBAs): 131072 (0GiB) 00:17:28.657 Capacity (in LBAs): 131072 (0GiB) 00:17:28.657 Utilization (in LBAs): 131072 (0GiB) 00:17:28.657 NGUID: ABCDEF0123456789ABCDEF0123456789 00:17:28.657 EUI64: ABCDEF0123456789 00:17:28.657 UUID: 2617193c-2855-49a5-838e-fc3fb781b916 00:17:28.657 Thin Provisioning: Not Supported 00:17:28.657 Per-NS Atomic Units: Yes 00:17:28.657 Atomic Boundary Size (Normal): 0 00:17:28.657 Atomic Boundary Size (PFail): 0 00:17:28.657 Atomic Boundary Offset: 0 00:17:28.657 Maximum Single Source Range Length: 65535 00:17:28.657 Maximum Copy Length: 65535 00:17:28.657 Maximum Source Range Count: 1 00:17:28.657 NGUID/EUI64 Never Reused: No 00:17:28.657 Namespace Write Protected: No 00:17:28.657 Number of LBA Formats: 1 00:17:28.657 Current LBA Format: LBA Format #00 00:17:28.657 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:28.657 00:17:28.657 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:17:28.657 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:28.657 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.657 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:28.657 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.657 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:17:28.657 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:17:28.657 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:28.657 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:17:28.657 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:28.657 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:17:28.657 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:28.657 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:28.657 rmmod nvme_tcp 00:17:28.657 rmmod nvme_fabrics 00:17:28.657 rmmod nvme_keyring 00:17:28.657 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:28.657 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:17:28.657 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:17:28.657 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@513 -- # '[' -n 87855 ']' 00:17:28.657 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # killprocess 87855 00:17:28.657 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 87855 ']' 00:17:28.657 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 
87855 00:17:28.657 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:17:28.657 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:28.658 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87855 00:17:28.658 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:28.658 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:28.658 killing process with pid 87855 00:17:28.658 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87855' 00:17:28.658 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 87855 00:17:28.658 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 87855 00:17:28.917 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:28.917 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:28.917 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:28.917 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:17:28.917 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:28.917 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-save 00:17:28.917 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-restore 00:17:28.917 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:28.917 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:28.917 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:28.917 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:28.917 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:28.917 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:28.917 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:28.917 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:28.917 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:28.917 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:28.917 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:28.917 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:28.917 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:29.176 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:29.176 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:29.176 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:29.176 00:33:14 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:29.176 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:29.176 00:33:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:29.176 00:33:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:17:29.176 00:17:29.176 real 0m2.745s 00:17:29.176 user 0m6.894s 00:17:29.176 sys 0m0.693s 00:17:29.176 00:33:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:29.176 00:33:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:29.176 ************************************ 00:17:29.176 END TEST nvmf_identify 00:17:29.176 ************************************ 00:17:29.177 00:33:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:29.177 00:33:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:29.177 00:33:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:29.177 00:33:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.177 ************************************ 00:17:29.177 START TEST nvmf_perf 00:17:29.177 ************************************ 00:17:29.177 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:29.177 * Looking for test storage... 00:17:29.177 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:29.177 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:29.177 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:17:29.177 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > 
ver2_l ? ver1_l : ver2_l) )) 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:29.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.436 --rc genhtml_branch_coverage=1 00:17:29.436 --rc genhtml_function_coverage=1 00:17:29.436 --rc genhtml_legend=1 00:17:29.436 --rc geninfo_all_blocks=1 00:17:29.436 --rc geninfo_unexecuted_blocks=1 00:17:29.436 00:17:29.436 ' 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:29.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.436 --rc genhtml_branch_coverage=1 00:17:29.436 --rc genhtml_function_coverage=1 00:17:29.436 --rc genhtml_legend=1 00:17:29.436 --rc geninfo_all_blocks=1 00:17:29.436 --rc geninfo_unexecuted_blocks=1 00:17:29.436 00:17:29.436 ' 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:29.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.436 --rc genhtml_branch_coverage=1 00:17:29.436 --rc genhtml_function_coverage=1 00:17:29.436 --rc genhtml_legend=1 00:17:29.436 --rc geninfo_all_blocks=1 00:17:29.436 --rc geninfo_unexecuted_blocks=1 00:17:29.436 00:17:29.436 ' 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:29.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.436 --rc genhtml_branch_coverage=1 00:17:29.436 --rc genhtml_function_coverage=1 00:17:29.436 --rc genhtml_legend=1 00:17:29.436 --rc geninfo_all_blocks=1 00:17:29.436 --rc geninfo_unexecuted_blocks=1 00:17:29.436 00:17:29.436 ' 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:29.436 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:29.436 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@456 -- # nvmf_veth_init 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:29.437 Cannot find device "nvmf_init_br" 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:29.437 Cannot find device "nvmf_init_br2" 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:29.437 Cannot find device "nvmf_tgt_br" 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:29.437 Cannot find device "nvmf_tgt_br2" 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:29.437 Cannot find device "nvmf_init_br" 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:29.437 Cannot find device "nvmf_init_br2" 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:29.437 Cannot find device "nvmf_tgt_br" 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:29.437 Cannot find device "nvmf_tgt_br2" 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:29.437 Cannot find device "nvmf_br" 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:29.437 Cannot find device "nvmf_init_if" 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:29.437 Cannot find device "nvmf_init_if2" 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:29.437 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:29.437 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:29.437 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:29.696 00:33:15 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:29.696 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:29.696 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:17:29.696 00:17:29.696 --- 10.0.0.3 ping statistics --- 00:17:29.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.696 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:29.696 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:17:29.696 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:17:29.696 00:17:29.696 --- 10.0.0.4 ping statistics --- 00:17:29.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.696 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:29.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:29.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:17:29.696 00:17:29.696 --- 10.0.0.1 ping statistics --- 00:17:29.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.696 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:29.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:29.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:17:29.696 00:17:29.696 --- 10.0.0.2 ping statistics --- 00:17:29.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.696 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@457 -- # return 0 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # nvmfpid=88114 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # waitforlisten 88114 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 88114 ']' 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:29.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:29.696 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:29.954 [2024-12-17 00:33:15.707349] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:17:29.954 [2024-12-17 00:33:15.707410] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:29.954 [2024-12-17 00:33:15.842121] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:29.954 [2024-12-17 00:33:15.882949] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:29.954 [2024-12-17 00:33:15.883018] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:29.954 [2024-12-17 00:33:15.883032] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:29.954 [2024-12-17 00:33:15.883041] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:29.954 [2024-12-17 00:33:15.883050] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:29.954 [2024-12-17 00:33:15.883222] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:29.954 [2024-12-17 00:33:15.883379] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:29.954 [2024-12-17 00:33:15.883707] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:29.954 [2024-12-17 00:33:15.883785] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.954 [2024-12-17 00:33:15.916502] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:30.214 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:30.214 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:17:30.214 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:30.214 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:30.214 00:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:30.214 00:33:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.214 00:33:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:30.214 00:33:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:17:30.473 00:33:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:17:30.473 00:33:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:30.732 00:33:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:17:30.732 00:33:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:31.299 00:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:31.299 00:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 
-- # '[' -n 0000:00:10.0 ']' 00:17:31.299 00:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:17:31.299 00:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:17:31.299 00:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:31.299 [2024-12-17 00:33:17.290139] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:31.558 00:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:31.817 00:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:31.817 00:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:32.076 00:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:32.076 00:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:17:32.335 00:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:32.335 [2024-12-17 00:33:18.311356] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:32.335 00:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:32.593 00:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:32.593 00:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:32.593 00:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:17:32.593 00:33:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:33.970 Initializing NVMe Controllers 00:17:33.970 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:17:33.970 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:17:33.970 Initialization complete. Launching workers. 00:17:33.970 ======================================================== 00:17:33.970 Latency(us) 00:17:33.970 Device Information : IOPS MiB/s Average min max 00:17:33.970 PCIE (0000:00:10.0) NSID 1 from core 0: 22177.98 86.63 1442.26 258.94 8039.39 00:17:33.970 ======================================================== 00:17:33.970 Total : 22177.98 86.63 1442.26 258.94 8039.39 00:17:33.970 00:17:33.970 00:33:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:35.347 Initializing NVMe Controllers 00:17:35.347 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:35.347 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:35.347 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:35.347 Initialization complete. Launching workers. 
00:17:35.347 ======================================================== 00:17:35.347 Latency(us) 00:17:35.347 Device Information : IOPS MiB/s Average min max 00:17:35.347 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3835.11 14.98 260.42 97.80 7208.77 00:17:35.347 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.75 0.48 8080.90 4919.21 12059.93 00:17:35.347 ======================================================== 00:17:35.347 Total : 3958.86 15.46 504.87 97.80 12059.93 00:17:35.347 00:17:35.347 00:33:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:36.725 Initializing NVMe Controllers 00:17:36.725 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:36.725 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:36.725 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:36.725 Initialization complete. Launching workers. 00:17:36.725 ======================================================== 00:17:36.725 Latency(us) 00:17:36.725 Device Information : IOPS MiB/s Average min max 00:17:36.725 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9264.48 36.19 3454.99 528.53 7161.92 00:17:36.725 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3989.98 15.59 8031.44 6622.12 12545.16 00:17:36.725 ======================================================== 00:17:36.725 Total : 13254.46 51.78 4832.63 528.53 12545.16 00:17:36.725 00:17:36.725 00:33:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:17:36.725 00:33:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:39.260 Initializing NVMe Controllers 00:17:39.260 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:39.260 Controller IO queue size 128, less than required. 00:17:39.260 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:39.260 Controller IO queue size 128, less than required. 00:17:39.260 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:39.260 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:39.260 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:39.260 Initialization complete. Launching workers. 
00:17:39.260 ======================================================== 00:17:39.260 Latency(us) 00:17:39.260 Device Information : IOPS MiB/s Average min max 00:17:39.260 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1944.69 486.17 66771.70 37705.09 108425.41 00:17:39.260 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 661.39 165.35 203571.67 31092.69 344120.90 00:17:39.260 ======================================================== 00:17:39.260 Total : 2606.08 651.52 101489.98 31092.69 344120.90 00:17:39.260 00:17:39.260 00:33:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:17:39.260 Initializing NVMe Controllers 00:17:39.260 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:39.260 Controller IO queue size 128, less than required. 00:17:39.260 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:39.260 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:17:39.260 Controller IO queue size 128, less than required. 00:17:39.260 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:39.260 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:17:39.260 WARNING: Some requested NVMe devices were skipped 00:17:39.260 No valid NVMe controllers or AIO or URING devices found 00:17:39.260 00:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:17:41.791 Initializing NVMe Controllers 00:17:41.791 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:41.791 Controller IO queue size 128, less than required. 00:17:41.791 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:41.791 Controller IO queue size 128, less than required. 00:17:41.791 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:41.791 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:41.791 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:41.791 Initialization complete. Launching workers. 
00:17:41.791 00:17:41.791 ==================== 00:17:41.791 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:17:41.791 TCP transport: 00:17:41.791 polls: 10046 00:17:41.791 idle_polls: 5933 00:17:41.791 sock_completions: 4113 00:17:41.791 nvme_completions: 6913 00:17:41.791 submitted_requests: 10346 00:17:41.791 queued_requests: 1 00:17:41.791 00:17:41.791 ==================== 00:17:41.791 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:17:41.791 TCP transport: 00:17:41.791 polls: 10170 00:17:41.791 idle_polls: 5879 00:17:41.791 sock_completions: 4291 00:17:41.791 nvme_completions: 6933 00:17:41.791 submitted_requests: 10330 00:17:41.791 queued_requests: 1 00:17:41.791 ======================================================== 00:17:41.791 Latency(us) 00:17:41.791 Device Information : IOPS MiB/s Average min max 00:17:41.791 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1726.29 431.57 75744.52 38801.36 115997.14 00:17:41.791 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1731.29 432.82 74365.85 21670.83 116397.16 00:17:41.791 ======================================================== 00:17:41.791 Total : 3457.58 864.39 75054.19 21670.83 116397.16 00:17:41.791 00:17:41.791 00:33:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:17:41.791 00:33:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:42.050 00:33:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:17:42.050 00:33:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:17:42.050 00:33:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:17:42.617 00:33:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=06f3eb87-0e20-44b3-a0b4-6478cae6187d 00:17:42.617 00:33:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 06f3eb87-0e20-44b3-a0b4-6478cae6187d 00:17:42.617 00:33:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=06f3eb87-0e20-44b3-a0b4-6478cae6187d 00:17:42.617 00:33:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:17:42.617 00:33:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:17:42.617 00:33:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:17:42.617 00:33:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:42.876 00:33:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:17:42.876 { 00:17:42.876 "uuid": "06f3eb87-0e20-44b3-a0b4-6478cae6187d", 00:17:42.876 "name": "lvs_0", 00:17:42.876 "base_bdev": "Nvme0n1", 00:17:42.876 "total_data_clusters": 1278, 00:17:42.876 "free_clusters": 1278, 00:17:42.876 "block_size": 4096, 00:17:42.876 "cluster_size": 4194304 00:17:42.876 } 00:17:42.876 ]' 00:17:42.876 00:33:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="06f3eb87-0e20-44b3-a0b4-6478cae6187d") .free_clusters' 00:17:42.876 00:33:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1278 00:17:42.876 00:33:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | 
select(.uuid=="06f3eb87-0e20-44b3-a0b4-6478cae6187d") .cluster_size' 00:17:42.876 00:33:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:17:42.877 00:33:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5112 00:17:42.877 5112 00:17:42.877 00:33:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5112 00:17:42.877 00:33:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:17:42.877 00:33:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 06f3eb87-0e20-44b3-a0b4-6478cae6187d lbd_0 5112 00:17:43.136 00:33:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=258d47c7-2e75-44e3-acac-de318885e3da 00:17:43.136 00:33:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 258d47c7-2e75-44e3-acac-de318885e3da lvs_n_0 00:17:43.394 00:33:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=f315e878-4f99-425a-8dc3-b4ea4e555222 00:17:43.394 00:33:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb f315e878-4f99-425a-8dc3-b4ea4e555222 00:17:43.394 00:33:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=f315e878-4f99-425a-8dc3-b4ea4e555222 00:17:43.394 00:33:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:17:43.394 00:33:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:17:43.394 00:33:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:17:43.394 00:33:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:43.653 00:33:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:17:43.653 { 00:17:43.653 "uuid": "06f3eb87-0e20-44b3-a0b4-6478cae6187d", 00:17:43.653 "name": "lvs_0", 00:17:43.653 "base_bdev": "Nvme0n1", 00:17:43.653 "total_data_clusters": 1278, 00:17:43.653 "free_clusters": 0, 00:17:43.653 "block_size": 4096, 00:17:43.653 "cluster_size": 4194304 00:17:43.653 }, 00:17:43.653 { 00:17:43.653 "uuid": "f315e878-4f99-425a-8dc3-b4ea4e555222", 00:17:43.653 "name": "lvs_n_0", 00:17:43.653 "base_bdev": "258d47c7-2e75-44e3-acac-de318885e3da", 00:17:43.653 "total_data_clusters": 1276, 00:17:43.653 "free_clusters": 1276, 00:17:43.653 "block_size": 4096, 00:17:43.653 "cluster_size": 4194304 00:17:43.653 } 00:17:43.653 ]' 00:17:43.653 00:33:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="f315e878-4f99-425a-8dc3-b4ea4e555222") .free_clusters' 00:17:43.653 00:33:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1276 00:17:43.653 00:33:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="f315e878-4f99-425a-8dc3-b4ea4e555222") .cluster_size' 00:17:43.653 00:33:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:17:43.653 00:33:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5104 00:17:43.653 5104 00:17:43.653 00:33:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5104 00:17:43.653 00:33:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:17:43.653 00:33:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f315e878-4f99-425a-8dc3-b4ea4e555222 lbd_nest_0 5104 00:17:43.912 00:33:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=38ff8915-b704-40a4-9566-a03dcf422c36 00:17:43.912 00:33:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:44.509 00:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:17:44.509 00:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 38ff8915-b704-40a4-9566-a03dcf422c36 00:17:44.509 00:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:44.775 00:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:17:44.775 00:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:17:44.775 00:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:17:44.775 00:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:17:44.775 00:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:45.034 Initializing NVMe Controllers 00:17:45.034 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:45.034 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:17:45.034 WARNING: Some requested NVMe devices were skipped 00:17:45.034 No valid NVMe controllers or AIO or URING devices found 00:17:45.293 00:33:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:17:45.293 00:33:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:55.269 Initializing NVMe Controllers 00:17:55.269 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:55.269 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:55.269 Initialization complete. Launching workers. 
00:17:55.269 ======================================================== 00:17:55.269 Latency(us) 00:17:55.269 Device Information : IOPS MiB/s Average min max 00:17:55.269 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 947.91 118.49 1054.51 330.37 7796.53 00:17:55.269 ======================================================== 00:17:55.269 Total : 947.91 118.49 1054.51 330.37 7796.53 00:17:55.269 00:17:55.527 00:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:17:55.527 00:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:17:55.527 00:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:55.786 Initializing NVMe Controllers 00:17:55.786 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:55.786 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:17:55.786 WARNING: Some requested NVMe devices were skipped 00:17:55.786 No valid NVMe controllers or AIO or URING devices found 00:17:55.786 00:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:17:55.786 00:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:07.996 Initializing NVMe Controllers 00:18:07.996 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:07.996 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:07.996 Initialization complete. Launching workers. 
00:18:07.996 ======================================================== 00:18:07.996 Latency(us) 00:18:07.996 Device Information : IOPS MiB/s Average min max 00:18:07.996 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1318.00 164.75 24302.25 5210.99 60175.58 00:18:07.996 ======================================================== 00:18:07.996 Total : 1318.00 164.75 24302.25 5210.99 60175.58 00:18:07.996 00:18:07.996 00:33:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:18:07.996 00:33:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:07.996 00:33:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:07.996 Initializing NVMe Controllers 00:18:07.996 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:07.996 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:18:07.996 WARNING: Some requested NVMe devices were skipped 00:18:07.996 No valid NVMe controllers or AIO or URING devices found 00:18:07.996 00:33:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:07.996 00:33:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:17.974 Initializing NVMe Controllers 00:18:17.974 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:17.974 Controller IO queue size 128, less than required. 00:18:17.974 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:17.974 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:17.974 Initialization complete. Launching workers. 
00:18:17.974 ======================================================== 00:18:17.974 Latency(us) 00:18:17.974 Device Information : IOPS MiB/s Average min max 00:18:17.974 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4129.20 516.15 31066.03 10231.27 65990.77 00:18:17.974 ======================================================== 00:18:17.974 Total : 4129.20 516.15 31066.03 10231.27 65990.77 00:18:17.974 00:18:17.974 00:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:17.974 00:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 38ff8915-b704-40a4-9566-a03dcf422c36 00:18:17.974 00:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:18:17.974 00:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 258d47c7-2e75-44e3-acac-de318885e3da 00:18:17.974 00:34:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:18:18.233 00:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:18:18.233 00:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:18:18.233 00:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:18.233 00:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:18:18.233 00:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:18.233 00:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:18:18.233 00:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:18.233 00:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:18.233 rmmod nvme_tcp 00:18:18.233 rmmod nvme_fabrics 00:18:18.233 rmmod nvme_keyring 00:18:18.233 00:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:18.233 00:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:18:18.234 00:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:18:18.234 00:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@513 -- # '[' -n 88114 ']' 00:18:18.234 00:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # killprocess 88114 00:18:18.234 00:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 88114 ']' 00:18:18.234 00:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 88114 00:18:18.234 00:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:18:18.234 00:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:18.234 00:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88114 00:18:18.234 killing process with pid 88114 00:18:18.234 00:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:18.234 00:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:18.234 00:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88114' 00:18:18.234 00:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@969 -- # kill 88114 00:18:18.234 00:34:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 88114 00:18:19.614 00:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:19.614 00:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:19.614 00:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:19.614 00:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:18:19.614 00:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:19.614 00:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-save 00:18:19.614 00:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-restore 00:18:19.614 00:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:19.614 00:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:19.614 00:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:19.614 00:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:19.614 00:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:19.614 00:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:19.614 00:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:19.615 00:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:19.615 00:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:19.615 00:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:19.615 00:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:19.615 00:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:19.615 00:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:19.615 00:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:19.615 00:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:19.615 00:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:19.615 00:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.615 00:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:19.615 00:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.615 00:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:18:19.615 00:18:19.615 real 0m50.431s 00:18:19.615 user 3m9.495s 00:18:19.615 sys 0m12.278s 00:18:19.615 00:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:19.615 00:34:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:19.615 ************************************ 00:18:19.615 END TEST nvmf_perf 00:18:19.615 ************************************ 00:18:19.615 00:34:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:19.615 00:34:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:19.615 00:34:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:19.615 00:34:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.615 ************************************ 00:18:19.615 START TEST nvmf_fio_host 00:18:19.615 ************************************ 00:18:19.615 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:19.615 * Looking for test storage... 00:18:19.874 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:19.874 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:19.874 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:18:19.874 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:19.874 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:19.874 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:19.874 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:19.874 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:19.874 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:18:19.874 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:18:19.874 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:19.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.875 --rc genhtml_branch_coverage=1 00:18:19.875 --rc genhtml_function_coverage=1 00:18:19.875 --rc genhtml_legend=1 00:18:19.875 --rc geninfo_all_blocks=1 00:18:19.875 --rc geninfo_unexecuted_blocks=1 00:18:19.875 00:18:19.875 ' 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:19.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.875 --rc genhtml_branch_coverage=1 00:18:19.875 --rc genhtml_function_coverage=1 00:18:19.875 --rc genhtml_legend=1 00:18:19.875 --rc geninfo_all_blocks=1 00:18:19.875 --rc geninfo_unexecuted_blocks=1 00:18:19.875 00:18:19.875 ' 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:19.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.875 --rc genhtml_branch_coverage=1 00:18:19.875 --rc genhtml_function_coverage=1 00:18:19.875 --rc genhtml_legend=1 00:18:19.875 --rc geninfo_all_blocks=1 00:18:19.875 --rc geninfo_unexecuted_blocks=1 00:18:19.875 00:18:19.875 ' 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:19.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.875 --rc genhtml_branch_coverage=1 00:18:19.875 --rc genhtml_function_coverage=1 00:18:19.875 --rc genhtml_legend=1 00:18:19.875 --rc geninfo_all_blocks=1 00:18:19.875 --rc geninfo_unexecuted_blocks=1 00:18:19.875 00:18:19.875 ' 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:19.875 00:34:05 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.875 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.876 00:34:05 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:19.876 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 
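The entries that follow trace nvmf_veth_init building the virtual network the TCP tests run over: veth pairs for the initiator side (10.0.0.1, 10.0.0.2), veth pairs whose target ends are moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3, 10.0.0.4), all joined by the nvmf_br bridge, with iptables rules admitting port 4420. A condensed sketch of the same topology, assuming the interface and namespace names the harness uses, is:

  # target-side namespace; the SPDK target runs inside it
  ip netns add nvmf_tgt_ns_spdk
  # initiator-side veth pairs (the *_if ends stay on the host)
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  # target-side veth pairs; the *_if ends go into the namespace
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # addressing: initiators on .1/.2, targets on .3/.4
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # one bridge joins the host-side peer ends
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
  # admit NVMe/TCP traffic on the listener port
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

Teardown (nvmftestfini, traced earlier in this log) is the mirror image: detach the bridge ports, delete the veth and bridge devices, and remove the namespace.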
00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@456 -- # nvmf_veth_init 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:19.876 Cannot find device "nvmf_init_br" 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:19.876 Cannot find device "nvmf_init_br2" 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:19.876 Cannot find device "nvmf_tgt_br" 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:18:19.876 Cannot find device "nvmf_tgt_br2" 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:19.876 Cannot find device "nvmf_init_br" 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:19.876 Cannot find device "nvmf_init_br2" 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:19.876 Cannot find device "nvmf_tgt_br" 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:19.876 Cannot find device "nvmf_tgt_br2" 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:18:19.876 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:20.134 Cannot find device "nvmf_br" 00:18:20.134 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:18:20.134 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:20.134 Cannot find device "nvmf_init_if" 00:18:20.134 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:18:20.134 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:20.134 Cannot find device "nvmf_init_if2" 00:18:20.134 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:18:20.134 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:20.134 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:20.134 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:18:20.134 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:20.134 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:20.134 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:18:20.134 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:20.134 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:20.135 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:20.135 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:20.135 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:20.135 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:20.135 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:20.135 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:18:20.135 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:20.135 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:20.135 00:34:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:20.135 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:20.135 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:20.135 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:20.135 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:20.135 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:20.135 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:20.135 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:20.135 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:20.135 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:20.135 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:20.135 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:20.135 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:20.135 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:20.135 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:20.135 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:20.135 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:20.135 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:20.135 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:20.135 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:20.135 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:20.135 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:20.135 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:20.135 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:20.135 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:18:20.135 00:18:20.135 --- 10.0.0.3 ping statistics --- 00:18:20.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.135 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:18:20.135 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:20.393 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:20.393 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:18:20.393 00:18:20.393 --- 10.0.0.4 ping statistics --- 00:18:20.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.393 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:18:20.393 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:20.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:20.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:18:20.394 00:18:20.394 --- 10.0.0.1 ping statistics --- 00:18:20.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.394 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:18:20.394 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:20.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:20.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:18:20.394 00:18:20.394 --- 10.0.0.2 ping statistics --- 00:18:20.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.394 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:18:20.394 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:20.394 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@457 -- # return 0 00:18:20.394 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:20.394 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:20.394 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:20.394 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:20.394 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:20.394 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:20.394 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:20.394 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:18:20.394 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:18:20.394 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:20.394 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.394 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=88973 00:18:20.394 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:20.394 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:20.394 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 88973 00:18:20.394 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@831 -- # '[' -z 88973 ']' 00:18:20.394 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.394 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:20.394 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.394 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:20.394 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.394 [2024-12-17 00:34:06.236141] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:18:20.394 [2024-12-17 00:34:06.236230] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:20.394 [2024-12-17 00:34:06.378279] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:20.652 [2024-12-17 00:34:06.419973] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:20.652 [2024-12-17 00:34:06.420203] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:20.652 [2024-12-17 00:34:06.420396] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:20.652 [2024-12-17 00:34:06.420569] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:20.652 [2024-12-17 00:34:06.420615] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
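Once nvmf_tgt is up, host/fio.sh drives it entirely through rpc.py: it creates the TCP transport, backs subsystem nqn.2016-06.io.spdk:cnode1 with a malloc bdev, exposes it on 10.0.0.3:4420, and then points fio at it through the SPDK ioengine's filename syntax. A condensed sketch of that sequence, using the same paths and names that appear in the trace below, is:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192     # TCP transport (flags as passed by the harness)
  $rpc bdev_malloc_create 64 512 -b Malloc1        # 64 MB RAM-backed bdev with 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # fio then reaches namespace 1 through the SPDK NVMe ioengine, e.g.
  #   LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /usr/src/fio/fio example_config.fio \
  #     '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096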
00:18:20.652 [2024-12-17 00:34:06.420938] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:20.652 [2024-12-17 00:34:06.420985] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:20.652 [2024-12-17 00:34:06.421199] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.652 [2024-12-17 00:34:06.421060] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:20.652 [2024-12-17 00:34:06.454284] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:20.652 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:20.652 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:18:20.652 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:20.910 [2024-12-17 00:34:06.808409] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:20.910 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:18:20.910 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:20.910 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.910 00:34:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:21.169 Malloc1 00:18:21.169 00:34:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:21.735 00:34:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:21.735 00:34:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:21.993 [2024-12-17 00:34:07.916239] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:21.993 00:34:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:18:22.251 00:34:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:18:22.251 00:34:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:22.251 00:34:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:22.251 00:34:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:22.251 00:34:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:22.251 00:34:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:22.251 00:34:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:22.251 00:34:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:18:22.251 00:34:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:22.251 00:34:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:22.251 00:34:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:18:22.251 00:34:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:22.251 00:34:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:22.251 00:34:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:22.251 00:34:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:22.251 00:34:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:22.251 00:34:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:22.251 00:34:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:22.251 00:34:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:22.251 00:34:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:22.251 00:34:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:22.251 00:34:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:22.251 00:34:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:22.508 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:22.508 fio-3.35 00:18:22.508 Starting 1 thread 00:18:25.046 00:18:25.046 test: (groupid=0, jobs=1): err= 0: pid=89043: Tue Dec 17 00:34:10 2024 00:18:25.046 read: IOPS=9455, BW=36.9MiB/s (38.7MB/s)(74.1MiB/2006msec) 00:18:25.046 slat (nsec): min=1807, max=315103, avg=2228.36, stdev=3064.89 00:18:25.046 clat (usec): min=2534, max=12284, avg=7039.02, stdev=611.10 00:18:25.046 lat (usec): min=2559, max=12286, avg=7041.25, stdev=610.90 00:18:25.046 clat percentiles (usec): 00:18:25.046 | 1.00th=[ 5932], 5.00th=[ 6194], 10.00th=[ 6390], 20.00th=[ 6587], 00:18:25.046 | 30.00th=[ 6718], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7111], 00:18:25.046 | 70.00th=[ 7308], 80.00th=[ 7504], 90.00th=[ 7767], 95.00th=[ 8094], 00:18:25.046 | 99.00th=[ 8717], 99.50th=[ 8979], 99.90th=[11338], 99.95th=[11731], 00:18:25.046 | 99.99th=[12256] 00:18:25.046 bw ( KiB/s): min=36336, max=38800, per=99.93%, avg=37796.00, stdev=1080.55, samples=4 00:18:25.046 iops : min= 9084, max= 9700, avg=9449.00, stdev=270.14, samples=4 00:18:25.046 write: IOPS=9455, BW=36.9MiB/s (38.7MB/s)(74.1MiB/2006msec); 0 zone resets 00:18:25.046 slat (nsec): min=1857, max=281562, avg=2337.06, stdev=2433.22 00:18:25.046 clat (usec): min=2434, max=12167, avg=6417.16, stdev=557.56 00:18:25.046 lat (usec): min=2448, max=12169, avg=6419.49, stdev=557.45 00:18:25.046 
clat percentiles (usec): 00:18:25.046 | 1.00th=[ 5342], 5.00th=[ 5669], 10.00th=[ 5800], 20.00th=[ 5997], 00:18:25.046 | 30.00th=[ 6128], 40.00th=[ 6259], 50.00th=[ 6390], 60.00th=[ 6456], 00:18:25.046 | 70.00th=[ 6652], 80.00th=[ 6783], 90.00th=[ 7111], 95.00th=[ 7373], 00:18:25.046 | 99.00th=[ 7963], 99.50th=[ 8160], 99.90th=[ 9503], 99.95th=[10814], 00:18:25.046 | 99.99th=[11994] 00:18:25.046 bw ( KiB/s): min=37256, max=38760, per=100.00%, avg=37820.00, stdev=659.58, samples=4 00:18:25.046 iops : min= 9314, max= 9690, avg=9455.00, stdev=164.90, samples=4 00:18:25.046 lat (msec) : 4=0.18%, 10=99.70%, 20=0.11% 00:18:25.046 cpu : usr=71.07%, sys=22.44%, ctx=43, majf=0, minf=6 00:18:25.046 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:25.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:25.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:25.046 issued rwts: total=18968,18967,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:25.046 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:25.046 00:18:25.046 Run status group 0 (all jobs): 00:18:25.046 READ: bw=36.9MiB/s (38.7MB/s), 36.9MiB/s-36.9MiB/s (38.7MB/s-38.7MB/s), io=74.1MiB (77.7MB), run=2006-2006msec 00:18:25.046 WRITE: bw=36.9MiB/s (38.7MB/s), 36.9MiB/s-36.9MiB/s (38.7MB/s-38.7MB/s), io=74.1MiB (77.7MB), run=2006-2006msec 00:18:25.046 00:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:25.046 00:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:25.046 00:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:25.046 00:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:25.046 00:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:25.046 00:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:25.046 00:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:18:25.046 00:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:25.046 00:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:25.046 00:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:25.046 00:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:18:25.046 00:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:25.046 00:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:25.046 00:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:25.046 00:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:25.046 00:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:25.046 00:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:25.046 00:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:25.046 00:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:25.046 00:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:25.046 00:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:25.046 00:34:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:25.046 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:18:25.046 fio-3.35 00:18:25.046 Starting 1 thread 00:18:27.575 00:18:27.575 test: (groupid=0, jobs=1): err= 0: pid=89089: Tue Dec 17 00:34:13 2024 00:18:27.575 read: IOPS=8617, BW=135MiB/s (141MB/s)(270MiB/2004msec) 00:18:27.575 slat (usec): min=3, max=110, avg= 3.50, stdev= 1.81 00:18:27.575 clat (usec): min=166, max=19778, avg=8341.33, stdev=2613.34 00:18:27.575 lat (usec): min=175, max=19782, avg=8344.83, stdev=2613.39 00:18:27.575 clat percentiles (usec): 00:18:27.575 | 1.00th=[ 3884], 5.00th=[ 4621], 10.00th=[ 5145], 20.00th=[ 5997], 00:18:27.575 | 30.00th=[ 6783], 40.00th=[ 7439], 50.00th=[ 8094], 60.00th=[ 8717], 00:18:27.575 | 70.00th=[ 9503], 80.00th=[10290], 90.00th=[11863], 95.00th=[13304], 00:18:27.575 | 99.00th=[15401], 99.50th=[15926], 99.90th=[17171], 99.95th=[17171], 00:18:27.575 | 99.99th=[19792] 00:18:27.575 bw ( KiB/s): min=63200, max=79680, per=52.03%, avg=71736.00, stdev=6953.82, samples=4 00:18:27.575 iops : min= 3950, max= 4980, avg=4483.50, stdev=434.61, samples=4 00:18:27.576 write: IOPS=5102, BW=79.7MiB/s (83.6MB/s)(146MiB/1830msec); 0 zone resets 00:18:27.576 slat (usec): min=33, max=162, avg=36.14, stdev= 5.69 00:18:27.576 clat (usec): min=3770, max=19540, avg=11186.48, stdev=2054.09 00:18:27.576 lat (usec): min=3803, max=19573, avg=11222.62, stdev=2054.10 00:18:27.576 clat percentiles (usec): 00:18:27.576 | 1.00th=[ 6915], 5.00th=[ 8160], 10.00th=[ 8848], 20.00th=[ 9503], 00:18:27.576 | 30.00th=[10028], 40.00th=[10552], 50.00th=[10945], 60.00th=[11469], 00:18:27.576 | 70.00th=[12125], 80.00th=[12911], 90.00th=[14091], 95.00th=[14877], 00:18:27.576 | 99.00th=[16319], 99.50th=[16909], 99.90th=[17957], 99.95th=[18482], 00:18:27.576 | 99.99th=[19530] 00:18:27.576 bw ( KiB/s): min=65920, max=83904, per=91.50%, avg=74696.00, stdev=7567.28, samples=4 00:18:27.576 iops : min= 4120, max= 5244, avg=4668.50, stdev=472.95, samples=4 00:18:27.576 lat (usec) : 250=0.01% 00:18:27.576 lat (msec) : 4=0.89%, 10=58.58%, 20=40.53% 00:18:27.576 cpu : usr=82.83%, sys=13.43%, ctx=27, majf=0, minf=2 00:18:27.576 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:27.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:27.576 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:27.576 issued rwts: total=17269,9337,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:27.576 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:27.576 00:18:27.576 Run status group 0 
(all jobs): 00:18:27.576 READ: bw=135MiB/s (141MB/s), 135MiB/s-135MiB/s (141MB/s-141MB/s), io=270MiB (283MB), run=2004-2004msec 00:18:27.576 WRITE: bw=79.7MiB/s (83.6MB/s), 79.7MiB/s-79.7MiB/s (83.6MB/s-83.6MB/s), io=146MiB (153MB), run=1830-1830msec 00:18:27.576 00:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:27.576 00:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:18:27.576 00:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:18:27.576 00:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:18:27.576 00:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:18:27.576 00:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:18:27.576 00:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:18:27.576 00:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:27.576 00:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:18:27.576 00:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:18:27.576 00:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:18:27.576 00:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.3 00:18:27.834 Nvme0n1 00:18:27.834 00:34:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:18:28.092 00:34:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=9dbb33f2-808e-4c2f-8ad7-ff1c95a31d06 00:18:28.092 00:34:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 9dbb33f2-808e-4c2f-8ad7-ff1c95a31d06 00:18:28.092 00:34:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=9dbb33f2-808e-4c2f-8ad7-ff1c95a31d06 00:18:28.092 00:34:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:18:28.092 00:34:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:18:28.092 00:34:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:18:28.092 00:34:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:28.350 00:34:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:18:28.350 { 00:18:28.350 "uuid": "9dbb33f2-808e-4c2f-8ad7-ff1c95a31d06", 00:18:28.350 "name": "lvs_0", 00:18:28.350 "base_bdev": "Nvme0n1", 00:18:28.350 "total_data_clusters": 4, 00:18:28.350 "free_clusters": 4, 00:18:28.350 "block_size": 4096, 00:18:28.350 "cluster_size": 1073741824 00:18:28.350 } 00:18:28.350 ]' 00:18:28.608 00:34:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="9dbb33f2-808e-4c2f-8ad7-ff1c95a31d06") .free_clusters' 00:18:28.608 00:34:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- 
# fc=4 00:18:28.608 00:34:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="9dbb33f2-808e-4c2f-8ad7-ff1c95a31d06") .cluster_size' 00:18:28.608 00:34:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:18:28.608 00:34:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4096 00:18:28.608 4096 00:18:28.608 00:34:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4096 00:18:28.608 00:34:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:18:28.866 983e9b86-e5e3-4ce0-8bf9-dc6cda01830b 00:18:28.866 00:34:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:18:29.124 00:34:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:18:29.381 00:34:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:18:29.639 00:34:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:29.639 00:34:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:29.639 00:34:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:29.639 00:34:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:29.639 00:34:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:29.639 00:34:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:29.639 00:34:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:18:29.639 00:34:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:29.639 00:34:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:29.639 00:34:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:18:29.639 00:34:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:29.639 00:34:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:29.639 00:34:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:29.639 00:34:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:29.639 00:34:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:29.639 00:34:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
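The free_mb value traced above is simple arithmetic on the lvstore dump: free_clusters * cluster_size / 1 MiB, i.e. 4 * 1073741824 B = 4096 MiB for lvs_0 (and, later in the run, 1022 * 4194304 B = 4088 MiB for the nested lvs_n_0). A minimal re-derivation with the same rpc.py and jq calls, assuming the target is still running and lvs_0 still exists:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Pull the two fields get_lvs_free_mb uses and redo its arithmetic
    fc=$($rpc bdev_lvol_get_lvstores | jq '.[] | select(.name=="lvs_0").free_clusters')
    cs=$($rpc bdev_lvol_get_lvstores | jq '.[] | select(.name=="lvs_0").cluster_size')
    echo $(( fc * cs / 1024 / 1024 ))   # 4 * 1073741824 / 1048576 = 4096
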
00:18:29.639 00:34:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:29.640 00:34:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:29.640 00:34:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:29.640 00:34:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:29.640 00:34:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:29.640 00:34:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:29.640 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:29.640 fio-3.35 00:18:29.640 Starting 1 thread 00:18:32.168 00:18:32.168 test: (groupid=0, jobs=1): err= 0: pid=89199: Tue Dec 17 00:34:17 2024 00:18:32.168 read: IOPS=6245, BW=24.4MiB/s (25.6MB/s)(49.0MiB/2009msec) 00:18:32.168 slat (nsec): min=2000, max=290464, avg=2799.40, stdev=3615.16 00:18:32.168 clat (usec): min=2863, max=19583, avg=10717.76, stdev=888.99 00:18:32.168 lat (usec): min=2870, max=19585, avg=10720.56, stdev=888.69 00:18:32.168 clat percentiles (usec): 00:18:32.168 | 1.00th=[ 8848], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10028], 00:18:32.168 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10945], 00:18:32.168 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11731], 95.00th=[11994], 00:18:32.168 | 99.00th=[12649], 99.50th=[13042], 99.90th=[17433], 99.95th=[18482], 00:18:32.168 | 99.99th=[19530] 00:18:32.168 bw ( KiB/s): min=24192, max=25448, per=99.90%, avg=24956.00, stdev=536.36, samples=4 00:18:32.168 iops : min= 6048, max= 6362, avg=6239.00, stdev=134.09, samples=4 00:18:32.168 write: IOPS=6237, BW=24.4MiB/s (25.5MB/s)(48.9MiB/2009msec); 0 zone resets 00:18:32.168 slat (usec): min=2, max=230, avg= 2.92, stdev= 2.75 00:18:32.168 clat (usec): min=2046, max=18612, avg=9721.09, stdev=824.45 00:18:32.168 lat (usec): min=2057, max=18614, avg=9724.01, stdev=824.29 00:18:32.168 clat percentiles (usec): 00:18:32.168 | 1.00th=[ 8029], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9110], 00:18:32.168 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[ 9896], 00:18:32.168 | 70.00th=[10159], 80.00th=[10290], 90.00th=[10683], 95.00th=[10945], 00:18:32.168 | 99.00th=[11469], 99.50th=[11731], 99.90th=[15270], 99.95th=[17171], 00:18:32.168 | 99.99th=[18482] 00:18:32.168 bw ( KiB/s): min=24704, max=25176, per=99.99%, avg=24946.00, stdev=201.58, samples=4 00:18:32.168 iops : min= 6176, max= 6294, avg=6236.50, stdev=50.40, samples=4 00:18:32.168 lat (msec) : 4=0.06%, 10=41.20%, 20=58.74% 00:18:32.168 cpu : usr=71.36%, sys=22.91%, ctx=5, majf=0, minf=6 00:18:32.168 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:18:32.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.168 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:32.168 issued rwts: total=12547,12531,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.168 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:32.168 00:18:32.168 Run status group 0 (all jobs): 00:18:32.168 READ: bw=24.4MiB/s (25.6MB/s), 24.4MiB/s-24.4MiB/s (25.6MB/s-25.6MB/s), io=49.0MiB (51.4MB), 
run=2009-2009msec 00:18:32.168 WRITE: bw=24.4MiB/s (25.5MB/s), 24.4MiB/s-24.4MiB/s (25.5MB/s-25.5MB/s), io=48.9MiB (51.3MB), run=2009-2009msec 00:18:32.168 00:34:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:32.427 00:34:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:18:32.685 00:34:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=08b33d25-378a-44f2-94e2-3f096a3e63fa 00:18:32.685 00:34:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 08b33d25-378a-44f2-94e2-3f096a3e63fa 00:18:32.685 00:34:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=08b33d25-378a-44f2-94e2-3f096a3e63fa 00:18:32.685 00:34:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:18:32.685 00:34:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:18:32.685 00:34:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:18:32.685 00:34:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:32.944 00:34:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:18:32.944 { 00:18:32.944 "uuid": "9dbb33f2-808e-4c2f-8ad7-ff1c95a31d06", 00:18:32.944 "name": "lvs_0", 00:18:32.944 "base_bdev": "Nvme0n1", 00:18:32.944 "total_data_clusters": 4, 00:18:32.944 "free_clusters": 0, 00:18:32.944 "block_size": 4096, 00:18:32.944 "cluster_size": 1073741824 00:18:32.944 }, 00:18:32.944 { 00:18:32.944 "uuid": "08b33d25-378a-44f2-94e2-3f096a3e63fa", 00:18:32.944 "name": "lvs_n_0", 00:18:32.944 "base_bdev": "983e9b86-e5e3-4ce0-8bf9-dc6cda01830b", 00:18:32.944 "total_data_clusters": 1022, 00:18:32.944 "free_clusters": 1022, 00:18:32.944 "block_size": 4096, 00:18:32.944 "cluster_size": 4194304 00:18:32.944 } 00:18:32.944 ]' 00:18:32.944 00:34:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="08b33d25-378a-44f2-94e2-3f096a3e63fa") .free_clusters' 00:18:32.944 00:34:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1022 00:18:32.944 00:34:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="08b33d25-378a-44f2-94e2-3f096a3e63fa") .cluster_size' 00:18:32.944 4088 00:18:32.944 00:34:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:18:32.944 00:34:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4088 00:18:32.944 00:34:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4088 00:18:32.944 00:34:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:18:33.202 7bb06e2f-be11-41b0-bdba-cfed2aa4b03d 00:18:33.202 00:34:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:18:33.461 00:34:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:18:33.719 00:34:19 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:18:33.978 00:34:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:33.978 00:34:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:33.978 00:34:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:33.978 00:34:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:33.978 00:34:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:33.978 00:34:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:33.978 00:34:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:18:33.978 00:34:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:33.978 00:34:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:33.978 00:34:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:33.978 00:34:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:33.978 00:34:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:18:33.978 00:34:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:33.978 00:34:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:33.978 00:34:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:33.978 00:34:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:33.978 00:34:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:33.978 00:34:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:33.978 00:34:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:33.978 00:34:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:33.978 00:34:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:33.978 00:34:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:33.978 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:33.978 fio-3.35 00:18:33.978 Starting 1 thread 00:18:36.510 00:18:36.510 test: (groupid=0, jobs=1): err= 0: pid=89274: Tue Dec 17 00:34:22 2024 00:18:36.510 read: 
IOPS=5711, BW=22.3MiB/s (23.4MB/s)(44.8MiB/2010msec) 00:18:36.511 slat (nsec): min=1967, max=311644, avg=2621.51, stdev=3958.20 00:18:36.511 clat (usec): min=3334, max=20851, avg=11746.19, stdev=966.23 00:18:36.511 lat (usec): min=3343, max=20853, avg=11748.81, stdev=965.95 00:18:36.511 clat percentiles (usec): 00:18:36.511 | 1.00th=[ 9634], 5.00th=[10421], 10.00th=[10683], 20.00th=[10945], 00:18:36.511 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11731], 60.00th=[11994], 00:18:36.511 | 70.00th=[12125], 80.00th=[12518], 90.00th=[12911], 95.00th=[13173], 00:18:36.511 | 99.00th=[13829], 99.50th=[14353], 99.90th=[18744], 99.95th=[19792], 00:18:36.511 | 99.99th=[20841] 00:18:36.511 bw ( KiB/s): min=21920, max=23384, per=100.00%, avg=22850.00, stdev=658.84, samples=4 00:18:36.511 iops : min= 5480, max= 5846, avg=5712.50, stdev=164.71, samples=4 00:18:36.511 write: IOPS=5700, BW=22.3MiB/s (23.3MB/s)(44.8MiB/2010msec); 0 zone resets 00:18:36.511 slat (usec): min=2, max=275, avg= 2.74, stdev= 3.17 00:18:36.511 clat (usec): min=2421, max=19961, avg=10629.69, stdev=939.90 00:18:36.511 lat (usec): min=2435, max=19964, avg=10632.43, stdev=939.74 00:18:36.511 clat percentiles (usec): 00:18:36.511 | 1.00th=[ 8717], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[ 9896], 00:18:36.511 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10683], 60.00th=[10814], 00:18:36.511 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11600], 95.00th=[11994], 00:18:36.511 | 99.00th=[12649], 99.50th=[13042], 99.90th=[18744], 99.95th=[19792], 00:18:36.511 | 99.99th=[19792] 00:18:36.511 bw ( KiB/s): min=22648, max=22896, per=99.87%, avg=22770.00, stdev=136.45, samples=4 00:18:36.511 iops : min= 5662, max= 5724, avg=5692.50, stdev=34.11, samples=4 00:18:36.511 lat (msec) : 4=0.05%, 10=12.44%, 20=87.50%, 50=0.01% 00:18:36.511 cpu : usr=75.36%, sys=19.76%, ctx=7, majf=0, minf=6 00:18:36.511 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:18:36.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:36.511 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:36.511 issued rwts: total=11480,11457,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:36.511 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:36.511 00:18:36.511 Run status group 0 (all jobs): 00:18:36.511 READ: bw=22.3MiB/s (23.4MB/s), 22.3MiB/s-22.3MiB/s (23.4MB/s-23.4MB/s), io=44.8MiB (47.0MB), run=2010-2010msec 00:18:36.511 WRITE: bw=22.3MiB/s (23.3MB/s), 22.3MiB/s-22.3MiB/s (23.3MB/s-23.3MB/s), io=44.8MiB (46.9MB), run=2010-2010msec 00:18:36.511 00:34:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:18:36.769 00:34:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:18:36.769 00:34:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:18:37.028 00:34:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:18:37.287 00:34:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:18:37.545 00:34:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:18:37.803 00:34:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:18:38.739 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:18:38.739 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:18:38.739 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:18:38.739 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:38.739 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:18:38.739 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:38.739 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:18:38.739 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:38.739 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:38.739 rmmod nvme_tcp 00:18:38.739 rmmod nvme_fabrics 00:18:38.739 rmmod nvme_keyring 00:18:38.739 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:38.739 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:18:38.739 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:18:38.739 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@513 -- # '[' -n 88973 ']' 00:18:38.739 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # killprocess 88973 00:18:38.739 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 88973 ']' 00:18:38.739 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 88973 00:18:38.739 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:18:38.739 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:38.739 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88973 00:18:38.739 killing process with pid 88973 00:18:38.739 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:38.739 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:38.739 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88973' 00:18:38.739 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 88973 00:18:38.739 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 88973 00:18:38.739 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:38.739 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:38.739 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:38.739 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:18:38.739 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-save 00:18:38.739 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:38.739 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-restore 00:18:38.739 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:38.739 
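The nvmftestfini path traced here unloads the kernel NVMe-oF modules, kills the target process, and strips only the SPDK-tagged firewall rules before the veth teardown that follows. A minimal sketch of the same cleanup, assuming the target still runs as pid 88973:

    modprobe -v -r nvme-tcp    # the rmmod lines above show nvme_fabrics and nvme_keyring going with it
    modprobe -v -r nvme-fabrics
    kill 88973                 # killprocess in the harness also waits for the pid to exit
    # Keep every iptables rule except the ones tagged SPDK_NVMF when they were added
    iptables-save | grep -v SPDK_NVMF | iptables-restore
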
00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:38.739 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:38.739 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:38.998 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:38.998 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:38.998 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:38.998 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:38.998 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:38.998 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:38.998 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:38.998 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:38.998 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:38.998 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:38.998 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:38.998 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:38.998 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:38.998 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:38.998 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:38.998 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:18:38.998 00:18:38.998 real 0m19.423s 00:18:38.998 user 1m24.760s 00:18:38.998 sys 0m4.393s 00:18:38.998 ************************************ 00:18:38.998 END TEST nvmf_fio_host 00:18:38.998 ************************************ 00:18:38.998 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:38.998 00:34:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.263 ************************************ 00:18:39.263 START TEST nvmf_failover 00:18:39.263 ************************************ 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:39.263 * Looking for test storage... 
00:18:39.263 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:39.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.263 --rc genhtml_branch_coverage=1 00:18:39.263 --rc genhtml_function_coverage=1 00:18:39.263 --rc genhtml_legend=1 00:18:39.263 --rc geninfo_all_blocks=1 00:18:39.263 --rc geninfo_unexecuted_blocks=1 00:18:39.263 00:18:39.263 ' 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:39.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.263 --rc genhtml_branch_coverage=1 00:18:39.263 --rc genhtml_function_coverage=1 00:18:39.263 --rc genhtml_legend=1 00:18:39.263 --rc geninfo_all_blocks=1 00:18:39.263 --rc geninfo_unexecuted_blocks=1 00:18:39.263 00:18:39.263 ' 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:39.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.263 --rc genhtml_branch_coverage=1 00:18:39.263 --rc genhtml_function_coverage=1 00:18:39.263 --rc genhtml_legend=1 00:18:39.263 --rc geninfo_all_blocks=1 00:18:39.263 --rc geninfo_unexecuted_blocks=1 00:18:39.263 00:18:39.263 ' 00:18:39.263 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:39.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.263 --rc genhtml_branch_coverage=1 00:18:39.263 --rc genhtml_function_coverage=1 00:18:39.263 --rc genhtml_legend=1 00:18:39.264 --rc geninfo_all_blocks=1 00:18:39.264 --rc geninfo_unexecuted_blocks=1 00:18:39.264 00:18:39.264 ' 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.264 
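The hostnqn/hostid pair generated above by nvme gen-hostnqn is what NVME_HOST would feed to the kernel initiator. This test never calls nvme connect itself (it drives the target through the SPDK fio plugin and bdevperf), so the following is only a hypothetical illustration of how those values would be used against the subsystem created later in the log:

    # Hypothetical kernel-initiator connect using the generated host identity
    nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 \
        --hostid=93817295-c2e4-400f-aefe-caa93fc06858
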
00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:39.264 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@434 -- # local -g is_hw=no 
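The "[: : integer expression expected" message above is harmless: common.sh line 33 runs a numeric test against an empty string, which bash's test builtin rejects. A one-line reproduction:

    [ '' -eq 1 ]    # -> [: : integer expression expected, exit status 2
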
00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@456 -- # nvmf_veth_init 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:39.264 Cannot find device "nvmf_init_br" 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:39.264 Cannot find device "nvmf_init_br2" 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:18:39.264 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:18:39.522 Cannot find device "nvmf_tgt_br" 00:18:39.522 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:18:39.522 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:39.522 Cannot find device "nvmf_tgt_br2" 00:18:39.522 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:18:39.522 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:39.522 Cannot find device "nvmf_init_br" 00:18:39.522 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:18:39.522 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:39.522 Cannot find device "nvmf_init_br2" 00:18:39.522 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:18:39.522 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:39.522 Cannot find device "nvmf_tgt_br" 00:18:39.522 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:18:39.522 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:39.522 Cannot find device "nvmf_tgt_br2" 00:18:39.522 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:18:39.522 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:39.522 Cannot find device "nvmf_br" 00:18:39.522 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:18:39.522 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:39.522 Cannot find device "nvmf_init_if" 00:18:39.522 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:18:39.522 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:39.522 Cannot find device "nvmf_init_if2" 00:18:39.522 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:18:39.522 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:39.522 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:39.522 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:18:39.522 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:39.522 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:39.522 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:18:39.522 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:39.522 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:39.522 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:39.522 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:39.522 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:39.522 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:39.522 
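The "Cannot find device" lines are expected: nvmf_veth_init first tries to delete any interfaces left over from a previous run, then rebuilds the topology from scratch. The scaffolding created so far boils down to one network namespace and four veth pairs, with the target-side ends moved into the namespace (addressing, bridging, and the firewall rules follow below):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    # Target endpoints (10.0.0.3/10.0.0.4) live inside the namespace; the
    # initiator endpoints (10.0.0.1/10.0.0.2) stay in the root namespace
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
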
00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:39.523 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:39.523 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:39.523 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:39.523 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:39.523 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:39.523 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:39.523 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:39.523 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:39.523 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:39.523 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:39.781 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:39.781 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:39.781 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:39.781 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:39.781 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:39.781 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:39.781 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:39.781 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:39.781 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:39.781 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:39.781 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:39.781 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:39.781 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:39.781 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:39.781 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:18:39.781 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:39.781 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:39.781 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:18:39.781 00:18:39.781 --- 10.0.0.3 ping statistics --- 00:18:39.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.781 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:18:39.781 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:39.781 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:39.781 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:18:39.781 00:18:39.781 --- 10.0.0.4 ping statistics --- 00:18:39.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.781 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:18:39.781 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:39.781 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:39.781 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:18:39.781 00:18:39.781 --- 10.0.0.1 ping statistics --- 00:18:39.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.781 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:18:39.781 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:39.781 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:39.781 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:18:39.781 00:18:39.781 --- 10.0.0.2 ping statistics --- 00:18:39.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.781 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:18:39.781 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:39.781 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@457 -- # return 0 00:18:39.781 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:39.781 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:39.781 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:39.781 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:39.781 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:39.781 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:39.781 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:39.781 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:18:39.781 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:39.781 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:39.781 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:39.781 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # nvmfpid=89567 00:18:39.782 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:39.782 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # waitforlisten 89567 00:18:39.782 00:34:25 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 89567 ']' 00:18:39.782 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.782 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:39.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.782 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.782 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:39.782 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:39.782 [2024-12-17 00:34:25.714201] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:18:39.782 [2024-12-17 00:34:25.714543] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:40.040 [2024-12-17 00:34:25.863499] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:40.040 [2024-12-17 00:34:25.904366] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:40.040 [2024-12-17 00:34:25.904425] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:40.040 [2024-12-17 00:34:25.904439] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:40.040 [2024-12-17 00:34:25.904449] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:40.040 [2024-12-17 00:34:25.904459] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
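After the ping checks confirm the veth topology, nvmfappstart launches nvmf_tgt inside the target namespace with core mask 0xE (three reactors, matching the "Reactor started on core 1/2/3" notices below), and waitforlisten blocks until the RPC socket answers. A simplified stand-in for that startup, assuming rpc.py's spdk_get_version method as the liveness probe (the real helper does more bookkeeping):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    # Poll the default RPC socket (/var/tmp/spdk.sock) until the app is ready
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done
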
00:18:40.040 [2024-12-17 00:34:25.904629] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:40.040 [2024-12-17 00:34:25.905462] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:40.040 [2024-12-17 00:34:25.905477] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:40.040 [2024-12-17 00:34:25.937993] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:40.040 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:40.040 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:18:40.040 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:40.040 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:40.040 00:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:40.040 00:34:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:40.040 00:34:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:40.607 [2024-12-17 00:34:26.303969] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:40.607 00:34:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:40.607 Malloc0 00:18:40.866 00:34:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:40.866 00:34:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:41.125 00:34:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:41.383 [2024-12-17 00:34:27.288755] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:41.383 00:34:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:41.642 [2024-12-17 00:34:27.568985] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:41.642 00:34:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:18:41.900 [2024-12-17 00:34:27.793139] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:18:41.900 00:34:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=89617 00:18:41.900 00:34:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:18:41.900 00:34:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
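For readability, the target-side configuration traced above reduces to a short rpc.py sequence. A minimal sketch follows (command paths shortened relative to the SPDK repo checkout; the flags, NQN, serial number, and 10.0.0.3 portal address are the exact values this run used, not general defaults):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                      # TCP transport, options as used by this run
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                         # 64 MB RAM-backed bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # expose Malloc0 as the subsystem namespace
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422

The three listeners on ports 4420-4422 are what give the host alternate portals to fail over between in the steps that follow.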
00:18:41.900 00:34:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 89617 /var/tmp/bdevperf.sock 00:18:41.900 00:34:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 89617 ']' 00:18:41.900 00:34:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:41.900 00:34:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:41.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:41.900 00:34:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:41.900 00:34:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:41.900 00:34:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:42.159 00:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:42.159 00:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:18:42.159 00:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:42.417 NVMe0n1 00:18:42.417 00:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:42.999 00:18:42.999 00:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:42.999 00:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=89633 00:18:42.999 00:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:18:43.946 00:34:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:44.205 [2024-12-17 00:34:29.972379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972486] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the 
state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.205 [2024-12-17 00:34:29.972943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.972950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.972957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.972965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.972973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.972980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.972987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.972994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 
00:34:29.973059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same 
with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 [2024-12-17 00:34:29.973318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d600 is same with the state(6) to be set 00:18:44.206 00:34:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:18:47.501 00:34:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:47.501 00:18:47.501 00:34:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:47.759 00:34:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:18:51.041 00:34:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:51.041 [2024-12-17 00:34:36.881189] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:51.041 00:34:36 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@55 -- # sleep 1 00:18:51.976 00:34:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:18:52.234 00:34:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 89633 00:18:58.802 { 00:18:58.802 "results": [ 00:18:58.802 { 00:18:58.802 "job": "NVMe0n1", 00:18:58.802 "core_mask": "0x1", 00:18:58.802 "workload": "verify", 00:18:58.802 "status": "finished", 00:18:58.802 "verify_range": { 00:18:58.802 "start": 0, 00:18:58.802 "length": 16384 00:18:58.802 }, 00:18:58.802 "queue_depth": 128, 00:18:58.802 "io_size": 4096, 00:18:58.802 "runtime": 15.008634, 00:18:58.802 "iops": 10079.731439916517, 00:18:58.802 "mibps": 39.373950937173895, 00:18:58.802 "io_failed": 3277, 00:18:58.802 "io_timeout": 0, 00:18:58.803 "avg_latency_us": 12401.156059476756, 00:18:58.803 "min_latency_us": 554.8218181818182, 00:18:58.803 "max_latency_us": 15847.796363636364 00:18:58.803 } 00:18:58.803 ], 00:18:58.803 "core_count": 1 00:18:58.803 } 00:18:58.803 00:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 89617 00:18:58.803 00:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 89617 ']' 00:18:58.803 00:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 89617 00:18:58.803 00:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:18:58.803 00:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:58.803 00:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89617 00:18:58.803 00:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:58.803 00:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:58.803 killing process with pid 89617 00:18:58.803 00:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89617' 00:18:58.803 00:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 89617 00:18:58.803 00:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 89617 00:18:58.803 00:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:58.803 [2024-12-17 00:34:27.856430] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:18:58.803 [2024-12-17 00:34:27.856555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89617 ] 00:18:58.803 [2024-12-17 00:34:27.988356] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.803 [2024-12-17 00:34:28.022437] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.803 [2024-12-17 00:34:28.050826] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:58.803 Running I/O for 15 seconds... 
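For reference, the host-side steps interleaved through the trace above amount to the following sequence, issued against the bdevperf RPC socket while the verify job is in flight. This is a rough sketch with the addresses, ports, and pacing taken from this run; the remove/add listener calls are what force the initiator to fail over between the 4420/4421/4422 portals:

  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &    # start the 15 s verify workload
  sleep 1
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420   # drop the active portal
  sleep 3
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  sleep 3
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420      # restore the original portal
  sleep 1
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
  wait                                                                            # let the bdevperf run finish

Despite the forced path changes, the JSON summary above reports roughly 10k IOPS with 3277 failed I/Os over the 15.0 s verify window.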
00:18:58.803 7588.00 IOPS, 29.64 MiB/s [2024-12-17T00:34:44.806Z] [2024-12-17 00:34:29.973401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:71968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.803 [2024-12-17 00:34:29.973442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.803 [2024-12-17 00:34:29.973468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:71976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.803 [2024-12-17 00:34:29.973483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.803 [2024-12-17 00:34:29.973500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:71984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.803 [2024-12-17 00:34:29.973513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.803 [2024-12-17 00:34:29.973528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:71992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.803 [2024-12-17 00:34:29.973541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.803 [2024-12-17 00:34:29.973556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:72000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.803 [2024-12-17 00:34:29.973569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.803 [2024-12-17 00:34:29.973584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:72008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.803 [2024-12-17 00:34:29.973597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.803 [2024-12-17 00:34:29.973612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:72016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.803 [2024-12-17 00:34:29.973624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.803 [2024-12-17 00:34:29.973639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:72024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.803 [2024-12-17 00:34:29.973653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.803 [2024-12-17 00:34:29.973667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:72032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.803 [2024-12-17 00:34:29.973680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.803 [2024-12-17 00:34:29.973694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:72040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.803 [2024-12-17 00:34:29.973707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:18:58.803 [2024-12-17 00:34:29.973736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:72048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.803 [2024-12-17 00:34:29.973773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.803 [2024-12-17 00:34:29.973789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:72056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.803 [2024-12-17 00:34:29.973801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.803 [2024-12-17 00:34:29.973816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:72064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.803 [2024-12-17 00:34:29.973829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.803 [2024-12-17 00:34:29.973843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:72072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.803 [2024-12-17 00:34:29.973860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.803 [2024-12-17 00:34:29.973875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:72080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.803 [2024-12-17 00:34:29.973888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.803 [2024-12-17 00:34:29.973902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.803 [2024-12-17 00:34:29.973915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.803 [2024-12-17 00:34:29.973930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:72096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.803 [2024-12-17 00:34:29.973942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.803 [2024-12-17 00:34:29.973956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:72104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.803 [2024-12-17 00:34:29.973969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.803 [2024-12-17 00:34:29.973983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:72112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.803 [2024-12-17 00:34:29.973996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.803 [2024-12-17 00:34:29.974010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:72120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.803 [2024-12-17 00:34:29.974022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.803 [2024-12-17 
00:34:29.974036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:72128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.803 [2024-12-17 00:34:29.974049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.803 [2024-12-17 00:34:29.974064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:72136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.803 [2024-12-17 00:34:29.974076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.803 [2024-12-17 00:34:29.974091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:72144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.803 [2024-12-17 00:34:29.974104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.803 [2024-12-17 00:34:29.974125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:72152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.803 [2024-12-17 00:34:29.974138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.803 [2024-12-17 00:34:29.974153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:72160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.803 [2024-12-17 00:34:29.974165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.803 [2024-12-17 00:34:29.974179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:72168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.803 [2024-12-17 00:34:29.974193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.803 [2024-12-17 00:34:29.974207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:72176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.803 [2024-12-17 00:34:29.974220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.803 [2024-12-17 00:34:29.974234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:72184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.803 [2024-12-17 00:34:29.974247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.803 [2024-12-17 00:34:29.974261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:72192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.803 [2024-12-17 00:34:29.974274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.803 [2024-12-17 00:34:29.974288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:72200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.803 [2024-12-17 00:34:29.974304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.803 [2024-12-17 00:34:29.974330] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:72208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.803 [2024-12-17 00:34:29.974347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.803 [2024-12-17 00:34:29.974362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:72216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.804 [2024-12-17 00:34:29.974375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.974389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:72224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.804 [2024-12-17 00:34:29.974402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.974416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:72232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.804 [2024-12-17 00:34:29.974429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.974443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:72240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.804 [2024-12-17 00:34:29.974456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.974470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:72248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.804 [2024-12-17 00:34:29.974490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.974523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:72256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.804 [2024-12-17 00:34:29.974536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.974551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:72264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.804 [2024-12-17 00:34:29.974565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.974579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:72272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.804 [2024-12-17 00:34:29.974592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.974607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:72280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.804 [2024-12-17 00:34:29.974620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.974634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:52 nsid:1 lba:72288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.804 [2024-12-17 00:34:29.974647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.974662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:72296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.804 [2024-12-17 00:34:29.974675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.974690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.804 [2024-12-17 00:34:29.974703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.974717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:72312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.804 [2024-12-17 00:34:29.974730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.974745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:72320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.804 [2024-12-17 00:34:29.974758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.974773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:72328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.804 [2024-12-17 00:34:29.974788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.974803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:72336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.804 [2024-12-17 00:34:29.974816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.974831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:72344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.804 [2024-12-17 00:34:29.974844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.974864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:72352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.804 [2024-12-17 00:34:29.974878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.974893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:72360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.804 [2024-12-17 00:34:29.974906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.974921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:72368 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.804 [2024-12-17 00:34:29.974934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.974964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.804 [2024-12-17 00:34:29.974978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.974993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:72384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.804 [2024-12-17 00:34:29.975006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.975021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.804 [2024-12-17 00:34:29.975034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.975050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.804 [2024-12-17 00:34:29.975063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.975078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:72408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.804 [2024-12-17 00:34:29.975092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.975107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:72416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.804 [2024-12-17 00:34:29.975120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.975135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:72424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.804 [2024-12-17 00:34:29.975149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.975164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:72432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.804 [2024-12-17 00:34:29.975177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.975192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:72440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.804 [2024-12-17 00:34:29.975205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.975220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:72448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:58.804 [2024-12-17 00:34:29.975235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.975257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:72456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.804 [2024-12-17 00:34:29.975273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.975289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:72464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.804 [2024-12-17 00:34:29.975302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.975317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.804 [2024-12-17 00:34:29.975331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.975357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:72480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.804 [2024-12-17 00:34:29.975371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.975386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:72488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.804 [2024-12-17 00:34:29.975400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.975415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:72496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.804 [2024-12-17 00:34:29.975428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.975444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:72504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.804 [2024-12-17 00:34:29.975457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.975472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:72512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.804 [2024-12-17 00:34:29.975485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.975500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:72520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.804 [2024-12-17 00:34:29.975514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.975529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:72528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.804 [2024-12-17 00:34:29.975543] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.975558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:72536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.804 [2024-12-17 00:34:29.975571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.804 [2024-12-17 00:34:29.975586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:72544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.805 [2024-12-17 00:34:29.975599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.805 [2024-12-17 00:34:29.975614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:72552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.805 [2024-12-17 00:34:29.975649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.805 [2024-12-17 00:34:29.975664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:72560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.805 [2024-12-17 00:34:29.975677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.805 [2024-12-17 00:34:29.975692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:72568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.805 [2024-12-17 00:34:29.975705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.805 [2024-12-17 00:34:29.975720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:72576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.805 [2024-12-17 00:34:29.975733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.805 [2024-12-17 00:34:29.975748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.805 [2024-12-17 00:34:29.975763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.805 [2024-12-17 00:34:29.975779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:72592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.805 [2024-12-17 00:34:29.975792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.805 [2024-12-17 00:34:29.975806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.805 [2024-12-17 00:34:29.975819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.805 [2024-12-17 00:34:29.975834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:72608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.805 [2024-12-17 00:34:29.975848] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.805 [2024-12-17 00:34:29.975863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:72616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.805 [2024-12-17 00:34:29.975876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.805 [2024-12-17 00:34:29.975890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:72624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.805 [2024-12-17 00:34:29.975903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.805 [2024-12-17 00:34:29.975918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:72632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.805 [2024-12-17 00:34:29.975930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.805 [2024-12-17 00:34:29.975945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:72640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.805 [2024-12-17 00:34:29.975958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.805 [2024-12-17 00:34:29.975972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:72648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.805 [2024-12-17 00:34:29.975986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.805 [2024-12-17 00:34:29.976007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:72656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.805 [2024-12-17 00:34:29.976020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.805 [2024-12-17 00:34:29.976035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:72664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.805 [2024-12-17 00:34:29.976048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.805 [2024-12-17 00:34:29.976063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:72672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.805 [2024-12-17 00:34:29.976076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.805 [2024-12-17 00:34:29.976090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:72680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.805 [2024-12-17 00:34:29.976103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.805 [2024-12-17 00:34:29.976118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:72688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.805 [2024-12-17 00:34:29.976131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.805 [2024-12-17 00:34:29.976146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:72696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.805 [2024-12-17 00:34:29.976158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.805 [2024-12-17 00:34:29.976173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:72704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.805 [2024-12-17 00:34:29.976186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.805 [2024-12-17 00:34:29.976201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:72712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.805 [2024-12-17 00:34:29.976216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.805 [2024-12-17 00:34:29.976232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:72720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.805 [2024-12-17 00:34:29.976245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.805 [2024-12-17 00:34:29.976259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:72728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.805 [2024-12-17 00:34:29.976272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.805 [2024-12-17 00:34:29.976287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:72736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.805 [2024-12-17 00:34:29.976300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.805 [2024-12-17 00:34:29.976314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:72744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.805 [2024-12-17 00:34:29.976354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.805 [2024-12-17 00:34:29.976370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:72752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.805 [2024-12-17 00:34:29.976387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.805 [2024-12-17 00:34:29.976406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:72760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.805 [2024-12-17 00:34:29.976421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.805 [2024-12-17 00:34:29.976435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:72768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.805 [2024-12-17 00:34:29.976449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.805 [2024-12-17 00:34:29.976464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:72776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.805 [2024-12-17 00:34:29.976477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.805 [2024-12-17 00:34:29.976496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:72784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.805 [2024-12-17 00:34:29.976510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.805 [2024-12-17 00:34:29.976525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:72792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.805 [2024-12-17 00:34:29.976549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.805 [2024-12-17 00:34:29.976566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:72800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.805 [2024-12-17 00:34:29.976579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.805 [2024-12-17 00:34:29.976594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:72808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.805 [2024-12-17 00:34:29.976607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.805 [2024-12-17 00:34:29.976622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:72816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.805 [2024-12-17 00:34:29.976636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.805 [2024-12-17 00:34:29.976651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:72824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.805 [2024-12-17 00:34:29.976680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.805 [2024-12-17 00:34:29.976695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:72832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.805 [2024-12-17 00:34:29.976709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.805 [2024-12-17 00:34:29.976725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:72840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.805 [2024-12-17 00:34:29.976740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.805 [2024-12-17 00:34:29.976756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:72848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.805 [2024-12-17 00:34:29.976770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:58.805 [2024-12-17 00:34:29.976793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:72872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.805 [2024-12-17 00:34:29.976808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.806 [2024-12-17 00:34:29.976838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:72880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.806 [2024-12-17 00:34:29.976852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.806 [2024-12-17 00:34:29.976867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:72888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.806 [2024-12-17 00:34:29.976880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.806 [2024-12-17 00:34:29.976895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:72896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.806 [2024-12-17 00:34:29.976909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.806 [2024-12-17 00:34:29.976924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:72904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.806 [2024-12-17 00:34:29.976937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.806 [2024-12-17 00:34:29.976952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:72912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.806 [2024-12-17 00:34:29.976966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.806 [2024-12-17 00:34:29.976980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.806 [2024-12-17 00:34:29.976994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.806 [2024-12-17 00:34:29.977011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.806 [2024-12-17 00:34:29.977025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.806 [2024-12-17 00:34:29.977040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:72936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.806 [2024-12-17 00:34:29.977053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.806 [2024-12-17 00:34:29.977068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:72944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.806 [2024-12-17 00:34:29.977082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.806 [2024-12-17 00:34:29.977096] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:72952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.806 [2024-12-17 00:34:29.977110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.806 [2024-12-17 00:34:29.977125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:72960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.806 [2024-12-17 00:34:29.977138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.806 [2024-12-17 00:34:29.977153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:72968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.806 [2024-12-17 00:34:29.977166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.806 [2024-12-17 00:34:29.977188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.806 [2024-12-17 00:34:29.977202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.806 [2024-12-17 00:34:29.977217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:72984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.806 [2024-12-17 00:34:29.977234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.806 [2024-12-17 00:34:29.977250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:72856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.806 [2024-12-17 00:34:29.977263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.806 [2024-12-17 00:34:29.977278] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2a540 is same with the state(6) to be set 00:18:58.806 [2024-12-17 00:34:29.977294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:58.806 [2024-12-17 00:34:29.977304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:58.806 [2024-12-17 00:34:29.977330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72864 len:8 PRP1 0x0 PRP2 0x0 00:18:58.806 [2024-12-17 00:34:29.977344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.806 [2024-12-17 00:34:29.977405] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f2a540 was disconnected and freed. reset controller. 
00:18:58.806 [2024-12-17 00:34:29.977424] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:18:58.806 [2024-12-17 00:34:29.977477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:58.806 [2024-12-17 00:34:29.977499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.806 [2024-12-17 00:34:29.977515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:58.806 [2024-12-17 00:34:29.977528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.806 [2024-12-17 00:34:29.977542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:58.806 [2024-12-17 00:34:29.977556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.806 [2024-12-17 00:34:29.977570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:58.806 [2024-12-17 00:34:29.977585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.806 [2024-12-17 00:34:29.977599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:58.806 [2024-12-17 00:34:29.981269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:58.806 [2024-12-17 00:34:29.981305] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f09f10 (9): Bad file descriptor 00:18:58.806 [2024-12-17 00:34:30.021612] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:58.806 8491.50 IOPS, 33.17 MiB/s [2024-12-17T00:34:44.809Z] 9134.33 IOPS, 35.68 MiB/s [2024-12-17T00:34:44.809Z] 9478.75 IOPS, 37.03 MiB/s [2024-12-17T00:34:44.809Z] [2024-12-17 00:34:33.603187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:113928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.806 [2024-12-17 00:34:33.603267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.806 [2024-12-17 00:34:33.603338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.806 [2024-12-17 00:34:33.603358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.806 [2024-12-17 00:34:33.603375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:113944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.806 [2024-12-17 00:34:33.603390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.806 [2024-12-17 00:34:33.603405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.806 [2024-12-17 00:34:33.603418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.806 [2024-12-17 00:34:33.603433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.806 [2024-12-17 00:34:33.603447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.806 [2024-12-17 00:34:33.603462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:113968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.806 [2024-12-17 00:34:33.603476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.806 [2024-12-17 00:34:33.603491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:113976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.806 [2024-12-17 00:34:33.603504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.806 [2024-12-17 00:34:33.603519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:113984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.806 [2024-12-17 00:34:33.603533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.806 [2024-12-17 00:34:33.603548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:113416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.806 [2024-12-17 00:34:33.603561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.806 [2024-12-17 00:34:33.603577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:113424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.806 [2024-12-17 00:34:33.603590] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.806 [2024-12-17 00:34:33.603605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:113432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.806 [2024-12-17 00:34:33.603619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.806 [2024-12-17 00:34:33.603633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:113440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.806 [2024-12-17 00:34:33.603647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.806 [2024-12-17 00:34:33.603661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:113448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.806 [2024-12-17 00:34:33.603675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.806 [2024-12-17 00:34:33.603699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:113456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.806 [2024-12-17 00:34:33.603714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.806 [2024-12-17 00:34:33.603730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:113464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.806 [2024-12-17 00:34:33.603743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.806 [2024-12-17 00:34:33.603758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:113472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.806 [2024-12-17 00:34:33.603771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.807 [2024-12-17 00:34:33.603786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:113992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.807 [2024-12-17 00:34:33.603800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.807 [2024-12-17 00:34:33.603815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:114000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.807 [2024-12-17 00:34:33.603831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.807 [2024-12-17 00:34:33.603847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:114008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.807 [2024-12-17 00:34:33.603860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.807 [2024-12-17 00:34:33.603875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.807 [2024-12-17 00:34:33.603889] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.807 [2024-12-17 00:34:33.603904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:114024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.807 [2024-12-17 00:34:33.603918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.807 [2024-12-17 00:34:33.603932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:114032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.807 [2024-12-17 00:34:33.603946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.807 [2024-12-17 00:34:33.603960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:114040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.807 [2024-12-17 00:34:33.603991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.807 [2024-12-17 00:34:33.604007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:114048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.807 [2024-12-17 00:34:33.604021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.807 [2024-12-17 00:34:33.604036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:113480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.807 [2024-12-17 00:34:33.604050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.807 [2024-12-17 00:34:33.604066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:113488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.807 [2024-12-17 00:34:33.604080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.807 [2024-12-17 00:34:33.604103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:113496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.807 [2024-12-17 00:34:33.604117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.807 [2024-12-17 00:34:33.604133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:113504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.807 [2024-12-17 00:34:33.604146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.807 [2024-12-17 00:34:33.604162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:113512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.807 [2024-12-17 00:34:33.604176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.807 [2024-12-17 00:34:33.604191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:113520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.807 [2024-12-17 00:34:33.604205] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.807 [2024-12-17 00:34:33.604220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:113528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.807 [2024-12-17 00:34:33.604234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.807 [2024-12-17 00:34:33.604251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:113536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.807 [2024-12-17 00:34:33.604265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.807 [2024-12-17 00:34:33.604280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:114056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.807 [2024-12-17 00:34:33.604294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.807 [2024-12-17 00:34:33.604310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:114064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.807 [2024-12-17 00:34:33.604336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.807 [2024-12-17 00:34:33.604354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:114072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.807 [2024-12-17 00:34:33.604369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.807 [2024-12-17 00:34:33.604384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:114080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.807 [2024-12-17 00:34:33.604398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.807 [2024-12-17 00:34:33.604414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:114088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.807 [2024-12-17 00:34:33.604428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.807 [2024-12-17 00:34:33.604443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:114096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.807 [2024-12-17 00:34:33.604457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.807 [2024-12-17 00:34:33.604473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:114104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.807 [2024-12-17 00:34:33.604494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.807 [2024-12-17 00:34:33.604510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:114112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.807 [2024-12-17 00:34:33.604525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.807 [2024-12-17 00:34:33.604566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:114120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.807 [2024-12-17 00:34:33.604583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.807 [2024-12-17 00:34:33.604599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:114128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.807 [2024-12-17 00:34:33.604614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.807 [2024-12-17 00:34:33.604630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:114136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.807 [2024-12-17 00:34:33.604644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.807 [2024-12-17 00:34:33.604660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:114144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.807 [2024-12-17 00:34:33.604675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.807 [2024-12-17 00:34:33.604691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:114152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.807 [2024-12-17 00:34:33.604705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.807 [2024-12-17 00:34:33.604721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:114160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.807 [2024-12-17 00:34:33.604735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.807 [2024-12-17 00:34:33.604751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:114168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.807 [2024-12-17 00:34:33.604776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.807 [2024-12-17 00:34:33.604792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:114176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.807 [2024-12-17 00:34:33.604806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.807 [2024-12-17 00:34:33.604822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:113544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.807 [2024-12-17 00:34:33.604836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.807 [2024-12-17 00:34:33.604853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:113552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.807 [2024-12-17 00:34:33.604882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:58.808 [2024-12-17 00:34:33.604897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:113560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.808 [2024-12-17 00:34:33.604911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.808 [2024-12-17 00:34:33.604934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:113568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.808 [2024-12-17 00:34:33.604949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.808 [2024-12-17 00:34:33.604964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:113576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.808 [2024-12-17 00:34:33.604978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.808 [2024-12-17 00:34:33.604993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:113584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.808 [2024-12-17 00:34:33.605007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.808 [2024-12-17 00:34:33.605023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:113592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.808 [2024-12-17 00:34:33.605036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.808 [2024-12-17 00:34:33.605052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:113600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.808 [2024-12-17 00:34:33.605066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.808 [2024-12-17 00:34:33.605081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:113608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.808 [2024-12-17 00:34:33.605095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.808 [2024-12-17 00:34:33.605110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:113616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.808 [2024-12-17 00:34:33.605124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.808 [2024-12-17 00:34:33.605140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:113624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.808 [2024-12-17 00:34:33.605154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.808 [2024-12-17 00:34:33.605169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:113632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.808 [2024-12-17 00:34:33.605184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.808 [2024-12-17 
00:34:33.605200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:113640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.808 [2024-12-17 00:34:33.605213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.808 [2024-12-17 00:34:33.605229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:113648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.808 [2024-12-17 00:34:33.605243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.808 [2024-12-17 00:34:33.605258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:113656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.808 [2024-12-17 00:34:33.605272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.808 [2024-12-17 00:34:33.605287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:113664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.808 [2024-12-17 00:34:33.605307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.808 [2024-12-17 00:34:33.605323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:114184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.808 [2024-12-17 00:34:33.605348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.808 [2024-12-17 00:34:33.605367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:114192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.808 [2024-12-17 00:34:33.605382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.808 [2024-12-17 00:34:33.605397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:114200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.808 [2024-12-17 00:34:33.605411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.808 [2024-12-17 00:34:33.605426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:114208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.808 [2024-12-17 00:34:33.605440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.808 [2024-12-17 00:34:33.605456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:114216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.808 [2024-12-17 00:34:33.605470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.808 [2024-12-17 00:34:33.605485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:114224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.808 [2024-12-17 00:34:33.605499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.808 [2024-12-17 00:34:33.605514] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:114232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.808 [2024-12-17 00:34:33.605528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.808 [2024-12-17 00:34:33.605543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:114240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.808 [2024-12-17 00:34:33.605557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.808 [2024-12-17 00:34:33.605573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:113672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.808 [2024-12-17 00:34:33.605587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.808 [2024-12-17 00:34:33.605602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:113680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.808 [2024-12-17 00:34:33.605616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.808 [2024-12-17 00:34:33.605631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:113688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.808 [2024-12-17 00:34:33.605645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.808 [2024-12-17 00:34:33.605661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:113696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.808 [2024-12-17 00:34:33.605676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.808 [2024-12-17 00:34:33.605699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:113704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.808 [2024-12-17 00:34:33.605713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.808 [2024-12-17 00:34:33.605729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:113712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.808 [2024-12-17 00:34:33.605743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.808 [2024-12-17 00:34:33.605758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:113720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.808 [2024-12-17 00:34:33.605772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.808 [2024-12-17 00:34:33.605787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:113728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.808 [2024-12-17 00:34:33.605801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.808 [2024-12-17 00:34:33.605817] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:114248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.808 [2024-12-17 00:34:33.605832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.808 [2024-12-17 00:34:33.605848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:114256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.808 [2024-12-17 00:34:33.605862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.808 [2024-12-17 00:34:33.605878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:114264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.808 [2024-12-17 00:34:33.605892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.808 [2024-12-17 00:34:33.605907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:114272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.808 [2024-12-17 00:34:33.605921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.808 [2024-12-17 00:34:33.605936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:114280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.808 [2024-12-17 00:34:33.605950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.808 [2024-12-17 00:34:33.605965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:114288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.808 [2024-12-17 00:34:33.605980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.808 [2024-12-17 00:34:33.605996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:114296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.808 [2024-12-17 00:34:33.606010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.808 [2024-12-17 00:34:33.606026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:114304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.808 [2024-12-17 00:34:33.606040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.808 [2024-12-17 00:34:33.606055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:114312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.808 [2024-12-17 00:34:33.606075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.808 [2024-12-17 00:34:33.606091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:114320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.808 [2024-12-17 00:34:33.606105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.808 [2024-12-17 00:34:33.606121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 
lba:114328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.809 [2024-12-17 00:34:33.606135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.809 [2024-12-17 00:34:33.606151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:114336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.809 [2024-12-17 00:34:33.606165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.809 [2024-12-17 00:34:33.606180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:114344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.809 [2024-12-17 00:34:33.606194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.809 [2024-12-17 00:34:33.606210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:114352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.809 [2024-12-17 00:34:33.606224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.809 [2024-12-17 00:34:33.606239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:114360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.809 [2024-12-17 00:34:33.606253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.809 [2024-12-17 00:34:33.606268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:58.809 [2024-12-17 00:34:33.606282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.809 [2024-12-17 00:34:33.606298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:113736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.809 [2024-12-17 00:34:33.606355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.809 [2024-12-17 00:34:33.606373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:113744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.809 [2024-12-17 00:34:33.606388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.809 [2024-12-17 00:34:33.606405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:113752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.809 [2024-12-17 00:34:33.606420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.809 [2024-12-17 00:34:33.606436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:113760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.809 [2024-12-17 00:34:33.606465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.809 [2024-12-17 00:34:33.606486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:113768 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:58.809 [2024-12-17 00:34:33.606501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.809 [2024-12-17 00:34:33.606519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:113776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.809 [2024-12-17 00:34:33.606543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.809 [2024-12-17 00:34:33.606560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:113784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.809 [2024-12-17 00:34:33.606575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.809 [2024-12-17 00:34:33.606592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:113792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.809 [2024-12-17 00:34:33.606607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.809 [2024-12-17 00:34:33.606624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:113800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.809 [2024-12-17 00:34:33.606639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.809 [2024-12-17 00:34:33.606655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:113808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.809 [2024-12-17 00:34:33.606670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.809 [2024-12-17 00:34:33.606686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:113816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.809 [2024-12-17 00:34:33.606701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.809 [2024-12-17 00:34:33.606731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:113824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.809 [2024-12-17 00:34:33.606746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.809 [2024-12-17 00:34:33.606762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:113832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.809 [2024-12-17 00:34:33.606776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.809 [2024-12-17 00:34:33.606792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:113840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.809 [2024-12-17 00:34:33.606807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:58.809 [2024-12-17 00:34:33.606822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.809 
[2024-12-17 00:34:33.606837-00:34:33.607347] nvme_qpair.c: *NOTICE*: nvme_io_qpair_print_command/spdk_nvme_print_completion pairs for every queued command on sqid:1 (READ lba 113856-113912, WRITE lba 114376-114432, len:8 each, cids vary per command), all completed as ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:58.809 [2024-12-17 00:34:33.607399] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:18:58.809 [2024-12-17 00:34:33.607417] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:18:58.809 [2024-12-17 00:34:33.607428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113920 len:8 PRP1 0x0 PRP2 0x0 
00:18:58.809 [2024-12-17 00:34:33.607442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:58.809 [2024-12-17 00:34:33.607488] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f2dbb0 was disconnected and freed. reset controller. 
00:18:58.809 [2024-12-17 00:34:33.607508] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 
00:18:58.809 [2024-12-17 00:34:33.607561-00:34:33.607667] nvme_qpair.c: *NOTICE*: four admin ASYNC EVENT REQUEST (0c) commands (qid:0 cid:3-0, cdw10:00000000 cdw11:00000000) each completed as ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:58.810 [2024-12-17 00:34:33.607680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:58.810 [2024-12-17 00:34:33.607715] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f09f10 (9): Bad file descriptor 
00:18:58.810 [2024-12-17 00:34:33.611520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:18:58.810 [2024-12-17 00:34:33.643412] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
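Each reset cycle above follows the same pattern: the active TCP path's qpair is torn down, every queued command is completed with ABORTED - SQ DELETION, bdev_nvme picks the next registered path (here 4421 to 4422) and reconnects, and I/O resumes once "Resetting controller successful" is logged. The cycle is driven by the same RPCs the harness issues later in this log; below is a minimal sketch of reproducing one failover by hand, assuming the target subsystem nqn.2016-06.io.spdk:cnode1 and the bdevperf app on /var/tmp/bdevperf.sock are already running, with all addresses, ports, and paths taken from this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # Target side: listen on the alternate ports in addition to 4420.
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4421
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4422

    # Host side: attach the primary path, then register the alternates under
    # the same controller name (the empty RPC results for 4421/4422 in this
    # log suggest they are recorded as failover paths, not new bdevs).
    for port in 4420 4421 4422; do
      $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
          -b NVMe0 -t tcp -a 10.0.0.3 -s "$port" -f ipv4 -n "$nqn"
    done

    # Drop the primary path and confirm the controller survives the failover.
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n "$nqn"
    sleep 3
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0

The harness repeats this for the remaining paths and finally checks that the bdevperf log contains exactly three "Resetting controller successful" lines (the grep -c / count=3 check at host/failover.sh@65 below), one per failover: 4420 to 4421, 4421 to 4422, and 4422 to 4420.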
00:18:58.810 9436.00 IOPS, 36.86 MiB/s [2024-12-17T00:34:44.813Z] 9575.33 IOPS, 37.40 MiB/s [2024-12-17T00:34:44.813Z] 9684.00 IOPS, 37.83 MiB/s [2024-12-17T00:34:44.813Z] 9763.50 IOPS, 38.14 MiB/s [2024-12-17T00:34:44.813Z] 9825.33 IOPS, 38.38 MiB/s [2024-12-17T00:34:44.813Z] 
00:18:58.810 [2024-12-17 00:34:38.182892-00:34:38.186821] nvme_qpair.c: *NOTICE*: nvme_io_qpair_print_command/spdk_nvme_print_completion pairs for every queued command on sqid:1 (READ lba 102608-103168, WRITE lba 103184-103624, len:8 each, cids vary per command), all completed as ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:58.813 [2024-12-17 00:34:38.186868] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:18:58.813 [2024-12-17 00:34:38.186883] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:18:58.813 [2024-12-17 00:34:38.186894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103176 len:8 PRP1 0x0 PRP2 0x0 
00:18:58.813 [2024-12-17 00:34:38.186906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:58.813 [2024-12-17 00:34:38.186951] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f2d870 was disconnected and freed. reset controller. 
00:18:58.813 [2024-12-17 00:34:38.186978] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 
00:18:58.813 [2024-12-17 00:34:38.187027-00:34:38.187129] nvme_qpair.c: *NOTICE*: four admin ASYNC EVENT REQUEST (0c) commands (qid:0 cid:0-3, cdw10:00000000 cdw11:00000000) each completed as ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:58.813 [2024-12-17 00:34:38.187141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:58.813 [2024-12-17 00:34:38.190598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:18:58.813 [2024-12-17 00:34:38.190635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f09f10 (9): Bad file descriptor 
00:18:58.813 [2024-12-17 00:34:38.222600] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
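The per-second checkpoints that follow, like the final summary table, report the same 4096-byte verify workload in both IOPS and MiB/s, so the two figures are unit conversions of each other. A quick check against the final numbers below (a plain awk one-liner, not part of the harness):

    awk 'BEGIN { printf "%.2f MiB/s\n", 10079.73 * 4096 / (1024 * 1024) }'   # prints 39.37 MiB/s, matching the table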
00:18:58.813 9850.70 IOPS, 38.48 MiB/s [2024-12-17T00:34:44.816Z] 9922.45 IOPS, 38.76 MiB/s [2024-12-17T00:34:44.816Z] 9965.75 IOPS, 38.93 MiB/s [2024-12-17T00:34:44.816Z] 10007.62 IOPS, 39.09 MiB/s [2024-12-17T00:34:44.816Z] 10046.50 IOPS, 39.24 MiB/s [2024-12-17T00:34:44.816Z] 10079.40 IOPS, 39.37 MiB/s 00:18:58.813 Latency(us) 00:18:58.813 [2024-12-17T00:34:44.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.813 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:58.813 Verification LBA range: start 0x0 length 0x4000 00:18:58.813 NVMe0n1 : 15.01 10079.73 39.37 218.34 0.00 12401.16 554.82 15847.80 00:18:58.813 [2024-12-17T00:34:44.816Z] =================================================================================================================== 00:18:58.813 [2024-12-17T00:34:44.816Z] Total : 10079.73 39.37 218.34 0.00 12401.16 554.82 15847.80 00:18:58.813 Received shutdown signal, test time was about 15.000000 seconds 00:18:58.813 00:18:58.813 Latency(us) 00:18:58.813 [2024-12-17T00:34:44.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.813 [2024-12-17T00:34:44.816Z] =================================================================================================================== 00:18:58.813 [2024-12-17T00:34:44.816Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:58.813 00:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:18:58.813 00:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:18:58.813 00:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:18:58.813 00:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=89806 00:18:58.813 00:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:18:58.813 00:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 89806 /var/tmp/bdevperf.sock 00:18:58.813 00:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 89806 ']' 00:18:58.813 00:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:58.813 00:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:58.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:58.813 00:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:58.813 00:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:58.813 00:34:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:58.813 00:34:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:58.813 00:34:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:18:58.813 00:34:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:58.813 [2024-12-17 00:34:44.554793] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:58.813 00:34:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:18:58.813 [2024-12-17 00:34:44.786958] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:18:59.080 00:34:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:59.339 NVMe0n1 00:18:59.339 00:34:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:59.596 00:18:59.596 00:34:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:59.855 00:18:59.855 00:34:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:18:59.855 00:34:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:00.114 00:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:00.372 00:34:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:19:03.655 00:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:03.655 00:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:19:03.655 00:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:03.655 00:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=89877 00:19:03.655 00:34:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 89877 00:19:05.029 { 00:19:05.030 "results": [ 00:19:05.030 { 00:19:05.030 "job": "NVMe0n1", 00:19:05.030 "core_mask": "0x1", 00:19:05.030 "workload": "verify", 00:19:05.030 "status": "finished", 00:19:05.030 "verify_range": { 00:19:05.030 "start": 0, 00:19:05.030 "length": 16384 00:19:05.030 }, 00:19:05.030 "queue_depth": 128, 00:19:05.030 "io_size": 4096, 
00:19:05.030 "runtime": 1.016376, 00:19:05.030 "iops": 7827.811754704951, 00:19:05.030 "mibps": 30.577389666816217, 00:19:05.030 "io_failed": 0, 00:19:05.030 "io_timeout": 0, 00:19:05.030 "avg_latency_us": 16290.815485168427, 00:19:05.030 "min_latency_us": 2025.658181818182, 00:19:05.030 "max_latency_us": 13702.981818181817 00:19:05.030 } 00:19:05.030 ], 00:19:05.030 "core_count": 1 00:19:05.030 } 00:19:05.030 00:34:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:05.030 [2024-12-17 00:34:44.041560] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:19:05.030 [2024-12-17 00:34:44.042051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89806 ] 00:19:05.030 [2024-12-17 00:34:44.170608] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.030 [2024-12-17 00:34:44.204102] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.030 [2024-12-17 00:34:44.231837] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:05.030 [2024-12-17 00:34:46.216854] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:19:05.030 [2024-12-17 00:34:46.217491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:05.030 [2024-12-17 00:34:46.217603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.030 [2024-12-17 00:34:46.217691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:05.030 [2024-12-17 00:34:46.217802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.030 [2024-12-17 00:34:46.217870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:05.030 [2024-12-17 00:34:46.217944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.030 [2024-12-17 00:34:46.218011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:05.030 [2024-12-17 00:34:46.218084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:05.030 [2024-12-17 00:34:46.218151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:05.030 [2024-12-17 00:34:46.218266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:05.030 [2024-12-17 00:34:46.218428] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2321f10 (9): Bad file descriptor 00:19:05.030 [2024-12-17 00:34:46.229051] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:05.030 Running I/O for 1 seconds... 
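Recap of the phase traced above: listeners are added on ports 4421 and 4422, the same bdev name NVMe0 is attached once per port so 4421 and 4422 become additional paths, and the 4420 path is then detached while the 1-second verify job runs. The try.txt excerpt confirms the effect ('Start failover from 10.0.0.3:4420 to 10.0.0.3:4421', followed by 'Resetting controller successful'). The RPC sequence, copied from the xtrace with the full script paths shortened to rpc.py:

    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # extra path
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # extra path
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1      # drop the active path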
00:19:05.030 7828.00 IOPS, 30.58 MiB/s 00:19:05.030 Latency(us) 00:19:05.030 [2024-12-17T00:34:51.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.030 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:05.030 Verification LBA range: start 0x0 length 0x4000 00:19:05.030 NVMe0n1 : 1.02 7827.81 30.58 0.00 0.00 16290.82 2025.66 13702.98 00:19:05.030 [2024-12-17T00:34:51.033Z] =================================================================================================================== 00:19:05.030 [2024-12-17T00:34:51.033Z] Total : 7827.81 30.58 0.00 0.00 16290.82 2025.66 13702.98 00:19:05.030 00:34:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:05.030 00:34:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:19:05.030 00:34:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:05.288 00:34:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:19:05.288 00:34:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:05.547 00:34:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:05.805 00:34:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:19:09.090 00:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:19:09.090 00:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:09.090 00:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 89806 00:19:09.090 00:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 89806 ']' 00:19:09.090 00:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 89806 00:19:09.090 00:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:19:09.090 00:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:09.090 00:34:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89806 00:19:09.090 killing process with pid 89806 00:19:09.090 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:09.090 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:09.090 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89806' 00:19:09.090 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 89806 00:19:09.090 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 89806 00:19:09.348 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:19:09.348 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:09.606 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:19:09.606 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:09.606 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:19:09.606 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # nvmfcleanup 00:19:09.606 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:19:09.606 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:09.606 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:19:09.606 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:09.606 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:09.606 rmmod nvme_tcp 00:19:09.606 rmmod nvme_fabrics 00:19:09.606 rmmod nvme_keyring 00:19:09.606 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:09.606 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:19:09.606 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:19:09.606 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@513 -- # '[' -n 89567 ']' 00:19:09.606 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # killprocess 89567 00:19:09.606 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 89567 ']' 00:19:09.606 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 89567 00:19:09.606 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:19:09.606 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:09.606 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89567 00:19:09.606 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:09.606 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:09.606 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89567' 00:19:09.606 killing process with pid 89567 00:19:09.606 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 89567 00:19:09.606 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 89567 00:19:09.865 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:19:09.865 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:19:09.865 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:19:09.865 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:19:09.865 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-save 00:19:09.865 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:19:09.865 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-restore 00:19:09.865 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:09.865 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:09.865 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:09.865 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:09.865 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:09.865 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:09.865 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:09.865 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:09.865 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:09.865 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:09.865 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:09.865 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:09.865 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:10.123 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:10.123 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:10.123 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:10.123 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.123 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:10.123 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.123 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:19:10.123 00:19:10.123 real 0m30.932s 00:19:10.123 user 1m59.212s 00:19:10.123 sys 0m5.273s 00:19:10.123 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:10.123 00:34:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:10.123 ************************************ 00:19:10.123 END TEST nvmf_failover 00:19:10.123 ************************************ 00:19:10.124 00:34:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:10.124 00:34:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:10.124 00:34:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:10.124 00:34:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.124 ************************************ 00:19:10.124 START TEST nvmf_host_discovery 00:19:10.124 ************************************ 00:19:10.124 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:10.124 * Looking for test storage... 
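Between the two tests the harness tears the fabric down completely: nvme-tcp, nvme-fabrics and nvme-keyring are unloaded, the target (pid 89567) is killed, the SPDK-tagged iptables rules are dropped, and the veth/bridge topology plus the nvmf_tgt_ns_spdk namespace are removed via remove_spdk_ns, before nvmf_host_discovery rebuilds the same layout further down. The firewall restore is logged as three separate commands at nvmf/common.sh@787; the pipeline order in this sketch is inferred, the link deletions are copied as traced:

    iptables-save | grep -v SPDK_NVMF | iptables-restore    # keep every rule except the SPDK_NVMF-tagged ACCEPTs
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2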
00:19:10.124 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:10.124 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:10.124 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:19:10.124 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:10.383 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:10.383 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:10.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.384 --rc genhtml_branch_coverage=1 00:19:10.384 --rc genhtml_function_coverage=1 00:19:10.384 --rc genhtml_legend=1 00:19:10.384 --rc geninfo_all_blocks=1 00:19:10.384 --rc geninfo_unexecuted_blocks=1 00:19:10.384 00:19:10.384 ' 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:10.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.384 --rc genhtml_branch_coverage=1 00:19:10.384 --rc genhtml_function_coverage=1 00:19:10.384 --rc genhtml_legend=1 00:19:10.384 --rc geninfo_all_blocks=1 00:19:10.384 --rc geninfo_unexecuted_blocks=1 00:19:10.384 00:19:10.384 ' 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:10.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.384 --rc genhtml_branch_coverage=1 00:19:10.384 --rc genhtml_function_coverage=1 00:19:10.384 --rc genhtml_legend=1 00:19:10.384 --rc geninfo_all_blocks=1 00:19:10.384 --rc geninfo_unexecuted_blocks=1 00:19:10.384 00:19:10.384 ' 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:10.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.384 --rc genhtml_branch_coverage=1 00:19:10.384 --rc genhtml_function_coverage=1 00:19:10.384 --rc genhtml_legend=1 00:19:10.384 --rc geninfo_all_blocks=1 00:19:10.384 --rc geninfo_unexecuted_blocks=1 00:19:10.384 00:19:10.384 ' 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:10.384 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:19:10.384 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@456 -- # nvmf_veth_init 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:10.385 Cannot find device "nvmf_init_br" 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:10.385 Cannot find device "nvmf_init_br2" 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:10.385 Cannot find device "nvmf_tgt_br" 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:10.385 Cannot find device "nvmf_tgt_br2" 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:10.385 Cannot find device "nvmf_init_br" 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:10.385 Cannot find device "nvmf_init_br2" 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:10.385 Cannot find device "nvmf_tgt_br" 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:10.385 Cannot find device "nvmf_tgt_br2" 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:10.385 Cannot find device "nvmf_br" 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:10.385 Cannot find device "nvmf_init_if" 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:10.385 Cannot find device "nvmf_init_if2" 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:10.385 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:10.385 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:10.385 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:10.644 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:10.644 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:10.644 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:10.644 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:10.644 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:10.644 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:10.644 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:10.644 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:10.644 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:10.644 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:10.644 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:10.644 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:10.644 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:10.644 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:10.644 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:10.644 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:10.644 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:10.644 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:10.644 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:10.644 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:10.644 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:10.644 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:10.644 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:10.644 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:10.644 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:10.645 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:10.645 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:10.645 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:10.645 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:19:10.645 00:19:10.645 --- 10.0.0.3 ping statistics --- 00:19:10.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.645 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:19:10.645 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:10.645 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:10.645 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:19:10.645 00:19:10.645 --- 10.0.0.4 ping statistics --- 00:19:10.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.645 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:19:10.645 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:10.645 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:10.645 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:19:10.645 00:19:10.645 --- 10.0.0.1 ping statistics --- 00:19:10.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.645 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:19:10.645 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:10.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:10.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:19:10.645 00:19:10.645 --- 10.0.0.2 ping statistics --- 00:19:10.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.645 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:19:10.645 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:10.645 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@457 -- # return 0 00:19:10.645 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:10.645 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:10.645 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:10.645 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:10.645 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:10.645 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:10.645 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:10.645 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:19:10.645 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:10.645 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:10.645 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:10.645 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # nvmfpid=90206 00:19:10.645 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:10.645 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # waitforlisten 90206 00:19:10.645 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 90206 ']' 00:19:10.645 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.645 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:10.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.645 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.645 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:10.645 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:10.645 [2024-12-17 00:34:56.620729] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:19:10.645 [2024-12-17 00:34:56.620801] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:10.904 [2024-12-17 00:34:56.750462] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.904 [2024-12-17 00:34:56.783011] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:10.904 [2024-12-17 00:34:56.783077] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:10.904 [2024-12-17 00:34:56.783103] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:10.904 [2024-12-17 00:34:56.783110] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:10.904 [2024-12-17 00:34:56.783116] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:10.904 [2024-12-17 00:34:56.783141] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:10.904 [2024-12-17 00:34:56.809955] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:10.904 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:10.904 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:19:10.904 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:10.904 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:10.904 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:11.163 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:11.163 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:11.163 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.163 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:11.163 [2024-12-17 00:34:56.928875] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:11.163 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.163 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:19:11.163 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.163 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:11.163 [2024-12-17 00:34:56.937011] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:19:11.163 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.163 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:19:11.163 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.163 00:34:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:11.163 null0 00:19:11.163 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.163 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:19:11.163 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.163 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:11.163 null1 00:19:11.163 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.163 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:19:11.163 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.163 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:11.163 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.163 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=90225 00:19:11.163 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:19:11.163 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 90225 /tmp/host.sock 00:19:11.163 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 90225 ']' 00:19:11.163 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:19:11.163 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:11.163 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:11.163 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:11.163 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:11.163 00:34:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:11.163 [2024-12-17 00:34:57.026542] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:19:11.164 [2024-12-17 00:34:57.026652] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90225 ] 00:19:11.164 [2024-12-17 00:34:57.159268] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.422 [2024-12-17 00:34:57.196729] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.422 [2024-12-17 00:34:57.227231] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:12.358 00:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:12.358 00:34:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:19:12.358 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:12.358 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:19:12.358 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.358 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:12.358 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.358 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:19:12.358 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.358 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:12.358 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.358 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:19:12.358 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:19:12.358 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:12.358 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.358 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:12.358 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:12.358 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:12.358 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:12.358 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.358 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:19:12.358 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:12.359 00:34:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.359 00:34:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.359 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:12.618 [2024-12-17 00:34:58.365373] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:12.618 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:12.619 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:12.619 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:19:12.619 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:12.619 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:12.619 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:12.619 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:12.619 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.619 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:12.619 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.619 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:19:12.619 00:34:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:19:13.186 [2024-12-17 00:34:59.031631] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:13.186 [2024-12-17 00:34:59.031676] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:13.186 [2024-12-17 00:34:59.031694] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:13.186 
[2024-12-17 00:34:59.037770] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:19:13.186 [2024-12-17 00:34:59.094328] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:13.186 [2024-12-17 00:34:59.094365] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:13.793 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.794 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:13.794 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:13.794 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && 
((notification_count == expected_count))' 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:14.053 [2024-12-17 00:34:59.954760] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:14.053 [2024-12-17 00:34:59.955122] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:14.053 [2024-12-17 00:34:59.955157] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:14.053 [2024-12-17 00:34:59.961128] bdev_nvme.c:7086:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:19:14.053 00:34:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:14.053 00:34:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.053 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.053 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:14.053 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:14.054 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:14.054 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:14.054 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:14.054 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:14.054 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:19:14.054 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:14.054 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.054 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:14.054 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:14.054 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:14.054 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:14.054 [2024-12-17 00:35:00.025727] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:14.054 [2024-12-17 00:35:00.025755] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:14.054 [2024-12-17 00:35:00.025778] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:19:14.054 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:14.313 00:35:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:14.313 [2024-12-17 00:35:00.187877] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:14.313 [2024-12-17 00:35:00.187925] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:14.313 [2024-12-17 00:35:00.192231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:14.313 [2024-12-17 00:35:00.192265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.313 [2024-12-17 00:35:00.192295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:14.313 [2024-12-17 00:35:00.192304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.313 [2024-12-17 00:35:00.192313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:14.313 [2024-12-17 00:35:00.192350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.313 [2024-12-17 00:35:00.192362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:14.313 [2024-12-17 00:35:00.192371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.313 [2024-12-17 00:35:00.192380] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa4480 is same with the state(6) to be set 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local 
max=10 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:14.313 [2024-12-17 00:35:00.193904] bdev_nvme.c:6949:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:19:14.313 [2024-12-17 00:35:00.193949] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:19:14.313 [2024-12-17 00:35:00.194002] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fa4480 (9): Bad file descriptor 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:14.313 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:19:14.314 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:14.314 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:14.314 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:14.314 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:14.314 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.314 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:14.314 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.314 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:14.314 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:14.314 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:14.314 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:14.314 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:14.314 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:14.314 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:19:14.314 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:19:14.314 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:14.314 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.314 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:14.314 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:14.314 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:14.314 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # 
eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:14.573 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.832 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:19:14.832 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:19:14.832 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:14.832 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:14.832 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:14.832 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.832 00:35:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:15.768 [2024-12-17 00:35:01.612284] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:15.768 [2024-12-17 00:35:01.612330] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:15.768 [2024-12-17 00:35:01.612364] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:15.768 [2024-12-17 00:35:01.618321] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:19:15.768 [2024-12-17 00:35:01.678834] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:15.768 [2024-12-17 00:35:01.678889] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:19:15.768 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.768 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:15.768 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:19:15.768 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:15.768 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:15.768 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:15.768 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:15.768 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:15.768 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:19:15.768 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.768 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:15.768 request: 00:19:15.768 { 00:19:15.768 "name": "nvme", 00:19:15.768 "trtype": "tcp", 00:19:15.768 "traddr": "10.0.0.3", 00:19:15.768 "adrfam": "ipv4", 00:19:15.768 "trsvcid": "8009", 00:19:15.768 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:15.768 "wait_for_attach": true, 00:19:15.768 "method": "bdev_nvme_start_discovery", 00:19:15.768 "req_id": 1 00:19:15.768 } 00:19:15.768 Got JSON-RPC error response 00:19:15.768 response: 00:19:15.768 { 00:19:15.768 "code": -17, 00:19:15.768 "message": "File exists" 00:19:15.768 } 00:19:15.768 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:15.768 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:19:15.768 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:15.768 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:15.768 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:15.768 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:19:15.768 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:15.768 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:15.768 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:15.768 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.768 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:15.768 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:15.768 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.768 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:19:15.768 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:19:15.768 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:15.768 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:15.768 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.768 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:15.768 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:15.768 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:16.027 request: 00:19:16.027 { 00:19:16.027 "name": "nvme_second", 00:19:16.027 "trtype": "tcp", 00:19:16.027 "traddr": "10.0.0.3", 00:19:16.027 "adrfam": "ipv4", 00:19:16.027 "trsvcid": "8009", 00:19:16.027 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:16.027 "wait_for_attach": true, 00:19:16.027 "method": "bdev_nvme_start_discovery", 00:19:16.027 "req_id": 1 00:19:16.027 } 00:19:16.027 Got JSON-RPC error response 00:19:16.027 response: 00:19:16.027 { 00:19:16.027 "code": -17, 00:19:16.027 "message": "File exists" 00:19:16.027 } 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:19:16.027 00:35:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.027 00:35:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:16.962 [2024-12-17 00:35:02.947580] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:16.962 [2024-12-17 00:35:02.947656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f99db0 with addr=10.0.0.3, port=8010 00:19:16.962 [2024-12-17 00:35:02.947673] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:16.962 [2024-12-17 00:35:02.947681] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:16.963 [2024-12-17 00:35:02.947689] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:19:18.339 [2024-12-17 00:35:03.947580] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.339 [2024-12-17 00:35:03.947652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f99db0 with addr=10.0.0.3, port=8010 00:19:18.339 [2024-12-17 00:35:03.947668] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:18.339 [2024-12-17 00:35:03.947676] nvme.c: 831:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:19:18.339 [2024-12-17 00:35:03.947684] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:19:19.274 [2024-12-17 00:35:04.947486] bdev_nvme.c:7205:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:19:19.274 request: 00:19:19.274 { 00:19:19.274 "name": "nvme_second", 00:19:19.274 "trtype": "tcp", 00:19:19.274 "traddr": "10.0.0.3", 00:19:19.274 "adrfam": "ipv4", 00:19:19.274 "trsvcid": "8010", 00:19:19.274 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:19.274 "wait_for_attach": false, 00:19:19.274 "attach_timeout_ms": 3000, 00:19:19.274 "method": "bdev_nvme_start_discovery", 00:19:19.274 "req_id": 1 00:19:19.274 } 00:19:19.274 Got JSON-RPC error response 00:19:19.274 response: 00:19:19.274 { 00:19:19.274 "code": -110, 00:19:19.274 "message": "Connection timed out" 00:19:19.274 } 00:19:19.274 00:35:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:19.274 00:35:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:19:19.274 00:35:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:19.274 00:35:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:19.274 00:35:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:19.274 00:35:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:19:19.274 00:35:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:19.274 00:35:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:19.274 00:35:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.274 00:35:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.274 00:35:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:19.274 00:35:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:19.274 00:35:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.274 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:19:19.274 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:19:19.274 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 90225 00:19:19.274 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:19:19.274 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:19:19.274 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:19:19.274 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:19.274 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:19:19.274 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:19.274 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:19.274 rmmod nvme_tcp 00:19:19.274 rmmod nvme_fabrics 00:19:19.274 rmmod nvme_keyring 00:19:19.274 00:35:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:19.274 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:19:19.274 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:19:19.274 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@513 -- # '[' -n 90206 ']' 00:19:19.274 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # killprocess 90206 00:19:19.274 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 90206 ']' 00:19:19.274 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 90206 00:19:19.274 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:19:19.275 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:19.275 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90206 00:19:19.275 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:19.275 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:19.275 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90206' 00:19:19.275 killing process with pid 90206 00:19:19.275 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 90206 00:19:19.275 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 90206 00:19:19.533 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:19:19.533 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:19:19.533 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:19:19.533 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:19:19.533 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-save 00:19:19.533 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:19:19.533 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:19:19.533 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:19.533 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:19.533 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:19.533 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:19.533 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:19.533 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:19.533 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:19.533 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:19.533 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 
00:19:19.533 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:19.533 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:19.533 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:19.533 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:19.533 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:19.533 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:19.792 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:19.792 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:19.792 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:19.792 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.792 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:19:19.792 00:19:19.792 real 0m9.571s 00:19:19.792 user 0m18.509s 00:19:19.792 sys 0m1.907s 00:19:19.792 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:19.792 ************************************ 00:19:19.792 END TEST nvmf_host_discovery 00:19:19.792 ************************************ 00:19:19.792 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.792 00:35:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:19:19.792 00:35:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:19.792 00:35:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:19.792 00:35:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.792 ************************************ 00:19:19.792 START TEST nvmf_host_multipath_status 00:19:19.792 ************************************ 00:19:19.792 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:19:19.792 * Looking for test storage... 
00:19:19.792 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:19.792 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:19.793 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:19:19.793 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:20.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.053 --rc genhtml_branch_coverage=1 00:19:20.053 --rc genhtml_function_coverage=1 00:19:20.053 --rc genhtml_legend=1 00:19:20.053 --rc geninfo_all_blocks=1 00:19:20.053 --rc geninfo_unexecuted_blocks=1 00:19:20.053 00:19:20.053 ' 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:20.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.053 --rc genhtml_branch_coverage=1 00:19:20.053 --rc genhtml_function_coverage=1 00:19:20.053 --rc genhtml_legend=1 00:19:20.053 --rc geninfo_all_blocks=1 00:19:20.053 --rc geninfo_unexecuted_blocks=1 00:19:20.053 00:19:20.053 ' 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:20.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.053 --rc genhtml_branch_coverage=1 00:19:20.053 --rc genhtml_function_coverage=1 00:19:20.053 --rc genhtml_legend=1 00:19:20.053 --rc geninfo_all_blocks=1 00:19:20.053 --rc geninfo_unexecuted_blocks=1 00:19:20.053 00:19:20.053 ' 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:20.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.053 --rc genhtml_branch_coverage=1 00:19:20.053 --rc genhtml_function_coverage=1 00:19:20.053 --rc genhtml_legend=1 00:19:20.053 --rc geninfo_all_blocks=1 00:19:20.053 --rc geninfo_unexecuted_blocks=1 00:19:20.053 00:19:20.053 ' 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:20.053 00:35:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:20.053 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:20.053 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # prepare_net_devs 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@434 -- # local -g is_hw=no 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # remove_spdk_ns 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # nvmf_veth_init 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:20.054 Cannot find device "nvmf_init_br" 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:20.054 Cannot find device "nvmf_init_br2" 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:20.054 Cannot find device "nvmf_tgt_br" 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:20.054 Cannot find device "nvmf_tgt_br2" 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:20.054 Cannot find device "nvmf_init_br" 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:20.054 Cannot find device "nvmf_init_br2" 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:20.054 Cannot find device "nvmf_tgt_br" 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:20.054 Cannot find device "nvmf_tgt_br2" 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:20.054 Cannot find device "nvmf_br" 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:19:20.054 Cannot find device "nvmf_init_if" 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:20.054 Cannot find device "nvmf_init_if2" 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:20.054 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:20.054 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:20.054 00:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:20.054 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:20.054 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:20.054 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:20.054 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:20.313 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:20.313 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:19:20.313 00:19:20.313 --- 10.0.0.3 ping statistics --- 00:19:20.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.313 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:20.313 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:20.313 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:19:20.313 00:19:20.313 --- 10.0.0.4 ping statistics --- 00:19:20.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.313 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:20.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:20.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:19:20.313 00:19:20.313 --- 10.0.0.1 ping statistics --- 00:19:20.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.313 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:20.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:20.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:19:20.313 00:19:20.313 --- 10.0.0.2 ping statistics --- 00:19:20.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.313 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # return 0 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # nvmfpid=90724 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # waitforlisten 90724 00:19:20.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.313 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 90724 ']' 00:19:20.314 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.314 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:20.314 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
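For reference, the target-side bring-up that this multipath_status run performs once nvmf_tgt is listening condenses to a handful of rpc.py calls; the following is only a sketch assembled from the multipath_status.sh commands visible elsewhere in this same log (same rpc.py path, NQN, Malloc0 bdev and 10.0.0.3 listeners), with flags copied verbatim rather than re-derived:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # TCP transport, same flags as multipath_status.sh@36 in this log
    $RPC nvmf_create_transport -t tcp -o -u 8192
    # 64 MiB malloc bdev with 512-byte blocks, used as the shared namespace
    $RPC bdev_malloc_create 64 512 -b Malloc0
    # subsystem with ANA reporting enabled (-r), as created by multipath_status.sh@39
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # two listeners on the target-namespace address, one per path
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421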
00:19:20.314 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:20.314 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:20.572 [2024-12-17 00:35:06.332626] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:19:20.572 [2024-12-17 00:35:06.332710] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:20.572 [2024-12-17 00:35:06.466501] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:20.572 [2024-12-17 00:35:06.498416] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:20.572 [2024-12-17 00:35:06.498462] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:20.572 [2024-12-17 00:35:06.498471] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:20.572 [2024-12-17 00:35:06.498477] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:20.572 [2024-12-17 00:35:06.498483] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:20.572 [2024-12-17 00:35:06.502347] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.572 [2024-12-17 00:35:06.502378] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.572 [2024-12-17 00:35:06.529512] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:20.831 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:20.831 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:19:20.831 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:20.831 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:20.831 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:20.831 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:20.831 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=90724 00:19:20.831 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:21.089 [2024-12-17 00:35:06.933217] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:21.089 00:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:21.348 Malloc0 00:19:21.348 00:35:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:19:21.606 00:35:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:21.864 00:35:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:22.122 [2024-12-17 00:35:07.985427] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:22.122 00:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:22.387 [2024-12-17 00:35:08.209546] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:22.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:22.387 00:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=90771 00:19:22.387 00:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:19:22.387 00:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:22.387 00:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 90771 /var/tmp/bdevperf.sock 00:19:22.387 00:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 90771 ']' 00:19:22.387 00:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:22.387 00:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:22.387 00:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
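On the host side, the trace that follows shows bdevperf started with its own RPC socket and both listeners attached as paths of a single controller, after which the test repeatedly polls per-port path state. A condensed sketch of that attach-and-poll pattern, reusing the exact socket, flags and jq filters that appear in the surrounding trace (only the repetition is paraphrased):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock
    # options set before attaching, exactly as host/multipath_status.sh@52 does
    $RPC -s $SOCK bdev_nvme_set_options -r -1
    # first path on 4420, second path on 4421 added to the same Nvme0 controller with -x multipath
    $RPC -s $SOCK bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    $RPC -s $SOCK bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    # per-port status check: current / connected / accessible for trsvcid 4420 or 4421
    $RPC -s $SOCK bdev_nvme_get_io_paths \
        | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'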
00:19:22.387 00:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:22.387 00:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:22.656 00:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:22.656 00:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:19:22.656 00:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:22.914 00:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:19:23.172 Nvme0n1 00:19:23.172 00:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:23.429 Nvme0n1 00:19:23.687 00:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:19:23.687 00:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:19:25.588 00:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:19:25.588 00:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:19:25.846 00:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:26.105 00:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:19:27.093 00:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:19:27.093 00:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:27.093 00:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:27.093 00:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:27.353 00:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:27.353 00:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:27.353 00:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:27.353 00:35:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:27.613 00:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:27.613 00:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:27.613 00:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:27.613 00:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:27.871 00:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:27.871 00:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:27.871 00:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:27.871 00:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:28.129 00:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:28.129 00:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:28.129 00:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:28.129 00:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:28.388 00:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:28.388 00:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:28.388 00:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:28.388 00:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:28.646 00:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:28.646 00:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:19:28.646 00:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:28.905 00:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:29.163 00:35:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:19:30.097 00:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:19:30.097 00:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:30.097 00:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:30.097 00:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:30.355 00:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:30.355 00:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:30.355 00:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:30.355 00:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:30.614 00:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:30.614 00:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:30.614 00:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:30.614 00:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:30.872 00:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:30.872 00:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:30.872 00:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:30.872 00:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:31.130 00:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:31.130 00:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:31.130 00:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:31.130 00:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:31.389 00:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:31.389 00:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:31.389 00:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:31.389 00:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:31.647 00:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:31.647 00:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:19:31.647 00:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:31.906 00:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:19:32.164 00:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:19:33.100 00:35:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:19:33.100 00:35:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:33.100 00:35:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:33.100 00:35:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:33.358 00:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:33.358 00:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:33.358 00:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:33.358 00:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:33.617 00:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:33.617 00:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:33.617 00:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:33.617 00:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:33.875 00:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:33.875 00:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:19:33.875 00:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:33.875 00:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:34.134 00:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:34.134 00:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:34.134 00:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:34.134 00:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:34.392 00:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:34.392 00:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:34.392 00:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:34.393 00:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:34.651 00:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:34.651 00:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:19:34.651 00:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:34.910 00:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:35.168 00:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:19:36.104 00:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:19:36.104 00:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:36.104 00:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:36.104 00:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:36.671 00:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:36.671 00:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:36.671 00:35:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:36.671 00:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:36.671 00:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:36.671 00:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:36.671 00:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:36.671 00:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:36.930 00:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:36.930 00:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:36.930 00:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:36.930 00:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:37.188 00:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:37.188 00:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:37.188 00:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:37.188 00:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:37.447 00:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:37.447 00:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:37.447 00:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:37.447 00:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:37.705 00:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:37.705 00:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:19:37.705 00:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:37.963 00:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:38.222 00:35:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:19:39.157 00:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:19:39.157 00:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:39.157 00:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:39.157 00:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:39.415 00:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:39.415 00:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:39.415 00:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:39.415 00:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:39.673 00:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:39.673 00:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:39.673 00:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:39.673 00:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:39.930 00:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:39.930 00:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:39.930 00:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:39.930 00:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:40.188 00:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:40.188 00:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:40.188 00:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:40.188 00:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:19:40.446 00:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:40.446 00:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:40.446 00:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:40.446 00:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:40.704 00:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:40.704 00:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:19:40.704 00:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:40.963 00:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:41.221 00:35:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:19:42.593 00:35:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:19:42.593 00:35:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:42.593 00:35:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:42.593 00:35:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:42.593 00:35:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:42.593 00:35:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:42.593 00:35:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:42.593 00:35:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:42.852 00:35:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:42.852 00:35:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:42.852 00:35:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:42.852 00:35:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:43.110 00:35:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:43.110 00:35:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:43.110 00:35:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:43.110 00:35:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:43.369 00:35:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:43.369 00:35:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:43.369 00:35:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:43.369 00:35:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:43.627 00:35:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:43.627 00:35:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:43.627 00:35:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:43.627 00:35:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:43.886 00:35:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:43.886 00:35:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:19:44.144 00:35:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:19:44.145 00:35:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:19:44.445 00:35:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:44.704 00:35:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:19:45.639 00:35:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:19:45.639 00:35:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:45.639 00:35:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:45.639 00:35:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:45.898 00:35:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:45.898 00:35:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:45.898 00:35:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:45.898 00:35:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:46.156 00:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:46.156 00:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:46.156 00:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:46.156 00:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:46.415 00:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:46.415 00:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:46.415 00:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:46.415 00:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:46.674 00:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:46.674 00:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:46.674 00:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:46.674 00:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:46.933 00:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:46.933 00:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:46.933 00:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:46.933 00:35:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:47.191 00:35:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:47.191 00:35:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:19:47.191 00:35:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:47.450 00:35:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:47.708 00:35:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:19:48.643 00:35:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:19:48.643 00:35:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:48.643 00:35:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:48.643 00:35:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:48.903 00:35:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:48.903 00:35:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:48.903 00:35:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:48.903 00:35:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:49.163 00:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:49.163 00:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:49.163 00:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:49.163 00:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:49.423 00:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:49.423 00:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:49.423 00:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:49.423 00:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:49.686 00:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:49.686 00:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:49.686 00:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:49.686 00:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:49.945 00:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:49.945 00:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:49.945 00:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:49.945 00:35:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:50.203 00:35:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:50.203 00:35:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:19:50.203 00:35:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:50.462 00:35:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:19:50.720 00:35:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:19:51.655 00:35:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:19:51.655 00:35:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:51.655 00:35:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:51.655 00:35:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:51.914 00:35:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:51.914 00:35:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:51.914 00:35:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:51.914 00:35:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:52.172 00:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:52.172 00:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:19:52.172 00:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:52.172 00:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:52.430 00:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:52.430 00:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:52.430 00:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:52.430 00:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:52.688 00:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:52.688 00:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:52.688 00:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:52.688 00:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:52.947 00:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:52.947 00:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:52.947 00:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:52.947 00:35:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:53.205 00:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:53.205 00:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:19:53.205 00:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:53.463 00:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:53.720 00:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:19:54.654 00:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:19:54.654 00:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:54.655 00:35:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:54.655 00:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:54.912 00:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:54.912 00:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:54.912 00:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:54.912 00:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:55.171 00:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:55.171 00:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:55.171 00:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:55.171 00:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:55.429 00:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:55.429 00:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:55.429 00:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:55.429 00:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:55.687 00:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:55.687 00:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:55.687 00:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:55.687 00:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:55.945 00:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:55.945 00:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:55.945 00:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:55.945 00:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:19:56.203 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:56.203 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 90771 00:19:56.203 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 90771 ']' 00:19:56.203 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 90771 00:19:56.203 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:19:56.203 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:56.203 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90771 00:19:56.203 killing process with pid 90771 00:19:56.203 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:56.203 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:56.203 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90771' 00:19:56.203 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 90771 00:19:56.203 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 90771 00:19:56.203 { 00:19:56.203 "results": [ 00:19:56.203 { 00:19:56.203 "job": "Nvme0n1", 00:19:56.203 "core_mask": "0x4", 00:19:56.203 "workload": "verify", 00:19:56.203 "status": "terminated", 00:19:56.203 "verify_range": { 00:19:56.203 "start": 0, 00:19:56.203 "length": 16384 00:19:56.203 }, 00:19:56.203 "queue_depth": 128, 00:19:56.203 "io_size": 4096, 00:19:56.203 "runtime": 32.566345, 00:19:56.203 "iops": 9262.322805951973, 00:19:56.203 "mibps": 36.180948460749896, 00:19:56.203 "io_failed": 0, 00:19:56.203 "io_timeout": 0, 00:19:56.203 "avg_latency_us": 13792.00976769418, 00:19:56.203 "min_latency_us": 904.8436363636364, 00:19:56.203 "max_latency_us": 4026531.84 00:19:56.203 } 00:19:56.203 ], 00:19:56.203 "core_count": 1 00:19:56.203 } 00:19:56.465 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 90771 00:19:56.465 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:56.465 [2024-12-17 00:35:08.280415] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:19:56.465 [2024-12-17 00:35:08.280518] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90771 ] 00:19:56.465 [2024-12-17 00:35:08.411849] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.465 [2024-12-17 00:35:08.448042] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:56.465 [2024-12-17 00:35:08.476373] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:56.465 [2024-12-17 00:35:09.403530] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:19:56.465 Running I/O for 90 seconds... 00:19:56.465 7957.00 IOPS, 31.08 MiB/s [2024-12-17T00:35:42.468Z] 8074.50 IOPS, 31.54 MiB/s [2024-12-17T00:35:42.468Z] 8028.33 IOPS, 31.36 MiB/s [2024-12-17T00:35:42.468Z] 8005.25 IOPS, 31.27 MiB/s [2024-12-17T00:35:42.468Z] 7965.80 IOPS, 31.12 MiB/s [2024-12-17T00:35:42.468Z] 7967.33 IOPS, 31.12 MiB/s [2024-12-17T00:35:42.468Z] 7965.00 IOPS, 31.11 MiB/s [2024-12-17T00:35:42.468Z] 7943.50 IOPS, 31.03 MiB/s [2024-12-17T00:35:42.468Z] 8203.78 IOPS, 32.05 MiB/s [2024-12-17T00:35:42.468Z] 8457.00 IOPS, 33.04 MiB/s [2024-12-17T00:35:42.468Z] 8637.82 IOPS, 33.74 MiB/s [2024-12-17T00:35:42.468Z] 8815.33 IOPS, 34.43 MiB/s [2024-12-17T00:35:42.468Z] 8974.15 IOPS, 35.06 MiB/s [2024-12-17T00:35:42.468Z] 9093.14 IOPS, 35.52 MiB/s [2024-12-17T00:35:42.468Z] [2024-12-17 00:35:23.813994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:123584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.465 [2024-12-17 00:35:23.814054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:56.465 [2024-12-17 00:35:23.814121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:123592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.465 [2024-12-17 00:35:23.814141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:56.465 [2024-12-17 00:35:23.814162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:123600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.465 [2024-12-17 00:35:23.814176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:56.465 [2024-12-17 00:35:23.814195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:123608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.465 [2024-12-17 00:35:23.814208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:56.465 [2024-12-17 00:35:23.814228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:123616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.465 [2024-12-17 00:35:23.814242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:56.465 [2024-12-17 00:35:23.814262] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:123624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.465 [2024-12-17 00:35:23.814275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.465 [2024-12-17 00:35:23.814294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:123632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.465 [2024-12-17 00:35:23.814307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.465 [2024-12-17 00:35:23.814340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:123640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.465 [2024-12-17 00:35:23.814357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:56.465 [2024-12-17 00:35:23.814381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:123648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.465 [2024-12-17 00:35:23.814415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:56.465 [2024-12-17 00:35:23.814438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:123656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.465 [2024-12-17 00:35:23.814452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:56.465 [2024-12-17 00:35:23.814471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:123664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.465 [2024-12-17 00:35:23.814484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:56.465 [2024-12-17 00:35:23.814503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:123672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.465 [2024-12-17 00:35:23.814516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:56.465 [2024-12-17 00:35:23.814535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:123680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.465 [2024-12-17 00:35:23.814548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:56.465 [2024-12-17 00:35:23.814567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:123688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.465 [2024-12-17 00:35:23.814596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:56.465 [2024-12-17 00:35:23.814616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:123696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.465 [2024-12-17 00:35:23.814630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:56.465 [2024-12-17 
00:35:23.814650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:123704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.465 [2024-12-17 00:35:23.814664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:56.465 [2024-12-17 00:35:23.814684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:123008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.465 [2024-12-17 00:35:23.814699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:56.465 [2024-12-17 00:35:23.814720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:123016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.465 [2024-12-17 00:35:23.814734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:56.465 [2024-12-17 00:35:23.814754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:123024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.465 [2024-12-17 00:35:23.814768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:56.465 [2024-12-17 00:35:23.814788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:123032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.465 [2024-12-17 00:35:23.814802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:56.465 [2024-12-17 00:35:23.814822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:123040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.465 [2024-12-17 00:35:23.814846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:56.465 [2024-12-17 00:35:23.814872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:123048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.465 [2024-12-17 00:35:23.814886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:56.465 [2024-12-17 00:35:23.814907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:123056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.465 [2024-12-17 00:35:23.814921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:56.465 [2024-12-17 00:35:23.814941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:123064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.465 [2024-12-17 00:35:23.814955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:56.465 [2024-12-17 00:35:23.814975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:123072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.466 [2024-12-17 00:35:23.814988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:4 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:56.466 [2024-12-17 00:35:23.815008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:123080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.466 [2024-12-17 00:35:23.815022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:56.466 [2024-12-17 00:35:23.815043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:123088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.466 [2024-12-17 00:35:23.815056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:56.466 [2024-12-17 00:35:23.815091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:123096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.466 [2024-12-17 00:35:23.815105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:56.466 [2024-12-17 00:35:23.815125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:123104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.466 [2024-12-17 00:35:23.815138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:56.466 [2024-12-17 00:35:23.815157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:123112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.466 [2024-12-17 00:35:23.815171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:56.466 [2024-12-17 00:35:23.815190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:123120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.466 [2024-12-17 00:35:23.815204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:56.466 [2024-12-17 00:35:23.815223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:123128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.466 [2024-12-17 00:35:23.815237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:56.466 [2024-12-17 00:35:23.815256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:123136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.466 [2024-12-17 00:35:23.815277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:56.466 [2024-12-17 00:35:23.815298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:123144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.466 [2024-12-17 00:35:23.815311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:56.466 [2024-12-17 00:35:23.815331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:123152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.466 [2024-12-17 00:35:23.815357] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:56.466 [2024-12-17 00:35:23.815393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:123160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.466 [2024-12-17 00:35:23.815408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:56.466 [2024-12-17 00:35:23.815428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:123168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.466 [2024-12-17 00:35:23.815442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:56.466 [2024-12-17 00:35:23.815462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:123176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.466 [2024-12-17 00:35:23.815477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.466 [2024-12-17 00:35:23.815503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:123184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.466 [2024-12-17 00:35:23.815518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:56.466 [2024-12-17 00:35:23.815540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:123192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.466 [2024-12-17 00:35:23.815554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:56.466 [2024-12-17 00:35:23.815592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:123712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.466 [2024-12-17 00:35:23.815611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:56.466 [2024-12-17 00:35:23.815633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:123720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.466 [2024-12-17 00:35:23.815648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:56.466 [2024-12-17 00:35:23.815668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:123728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.466 [2024-12-17 00:35:23.815683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:56.466 [2024-12-17 00:35:23.815702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:123736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.466 [2024-12-17 00:35:23.815717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:56.466 [2024-12-17 00:35:23.815737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:123744 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:56.466 [2024-12-17 00:35:23.815751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:56.466 [2024-12-17 00:35:23.815781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:123752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.466 [2024-12-17 00:35:23.815797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:56.466 [2024-12-17 00:35:23.815831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:123760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.466 [2024-12-17 00:35:23.815845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:56.466 [2024-12-17 00:35:23.815864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:123768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.466 [2024-12-17 00:35:23.815878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:56.466 [2024-12-17 00:35:23.815897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:123200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.466 [2024-12-17 00:35:23.815912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:56.466 [2024-12-17 00:35:23.815931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:123208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.466 [2024-12-17 00:35:23.815945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:56.466 [2024-12-17 00:35:23.815964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.466 [2024-12-17 00:35:23.815978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:56.466 [2024-12-17 00:35:23.815997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:123224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.466 [2024-12-17 00:35:23.816011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:56.466 [2024-12-17 00:35:23.816031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:123232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.466 [2024-12-17 00:35:23.816044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:56.466 [2024-12-17 00:35:23.816064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:123240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.466 [2024-12-17 00:35:23.816078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:56.466 [2024-12-17 00:35:23.816098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:126 nsid:1 lba:123248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.466 [2024-12-17 00:35:23.816112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:56.466 [2024-12-17 00:35:23.816131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:123256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.466 [2024-12-17 00:35:23.816144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:56.466 [2024-12-17 00:35:23.816164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:123264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.466 [2024-12-17 00:35:23.816177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:56.466 [2024-12-17 00:35:23.816203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:123272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.466 [2024-12-17 00:35:23.816218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:56.466 [2024-12-17 00:35:23.816237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:123280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.466 [2024-12-17 00:35:23.816251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:56.466 [2024-12-17 00:35:23.816271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:123288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.466 [2024-12-17 00:35:23.816285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:56.466 [2024-12-17 00:35:23.816304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:123296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.466 [2024-12-17 00:35:23.816318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:56.466 [2024-12-17 00:35:23.816347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:123304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.466 [2024-12-17 00:35:23.816364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:56.466 [2024-12-17 00:35:23.816402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:123312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.466 [2024-12-17 00:35:23.816416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:56.466 [2024-12-17 00:35:23.816436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:123320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.466 [2024-12-17 00:35:23.816451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:56.467 [2024-12-17 
00:35:23.816475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:123776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.467 [2024-12-17 00:35:23.816491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:56.467 [2024-12-17 00:35:23.816511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:123784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.467 [2024-12-17 00:35:23.816526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:56.467 [2024-12-17 00:35:23.816546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:123792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.467 [2024-12-17 00:35:23.816580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:56.467 [2024-12-17 00:35:23.816608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:123800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.467 [2024-12-17 00:35:23.816624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:56.467 [2024-12-17 00:35:23.816644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:123808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.467 [2024-12-17 00:35:23.816659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:56.467 [2024-12-17 00:35:23.816689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:123816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.467 [2024-12-17 00:35:23.816705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.467 [2024-12-17 00:35:23.816725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:123824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.467 [2024-12-17 00:35:23.816745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.467 [2024-12-17 00:35:23.816766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:123832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.467 [2024-12-17 00:35:23.816780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:56.467 [2024-12-17 00:35:23.816815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:123328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.467 [2024-12-17 00:35:23.816829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:56.467 [2024-12-17 00:35:23.816863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:123336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.467 [2024-12-17 00:35:23.816876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:64 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:56.467 [2024-12-17 00:35:23.816896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:123344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.467 [2024-12-17 00:35:23.816909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:56.467 [2024-12-17 00:35:23.816928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:123352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.467 [2024-12-17 00:35:23.816942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:56.467 [2024-12-17 00:35:23.816960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:123360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.467 [2024-12-17 00:35:23.816973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:56.467 [2024-12-17 00:35:23.817007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:123368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.467 [2024-12-17 00:35:23.817024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:56.467 [2024-12-17 00:35:23.817048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:123376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.467 [2024-12-17 00:35:23.817063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:56.467 [2024-12-17 00:35:23.817083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:123384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.467 [2024-12-17 00:35:23.817103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:56.467 [2024-12-17 00:35:23.817124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:123392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.467 [2024-12-17 00:35:23.817138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:56.467 [2024-12-17 00:35:23.817157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:123400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.467 [2024-12-17 00:35:23.817178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:56.467 [2024-12-17 00:35:23.817198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:123408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.467 [2024-12-17 00:35:23.817212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:56.467 [2024-12-17 00:35:23.817231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:123416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.467 [2024-12-17 00:35:23.817244] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:56.467 [2024-12-17 00:35:23.817263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:123424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.467 [2024-12-17 00:35:23.817277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:56.467 [2024-12-17 00:35:23.817296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:123432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.467 [2024-12-17 00:35:23.817324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:56.467 [2024-12-17 00:35:23.817346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:123440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.467 [2024-12-17 00:35:23.817361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:56.467 [2024-12-17 00:35:23.817380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:123448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.467 [2024-12-17 00:35:23.817409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:56.467 [2024-12-17 00:35:23.817432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:123840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.467 [2024-12-17 00:35:23.817448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:56.467 [2024-12-17 00:35:23.817467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:123848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.467 [2024-12-17 00:35:23.817481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:56.467 [2024-12-17 00:35:23.817502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:123856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.467 [2024-12-17 00:35:23.817515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:56.467 [2024-12-17 00:35:23.817535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:123864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.467 [2024-12-17 00:35:23.817549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:56.467 [2024-12-17 00:35:23.817569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:123872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.467 [2024-12-17 00:35:23.817583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:56.467 [2024-12-17 00:35:23.817602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:123880 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:56.467 [2024-12-17 00:35:23.817625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:56.467 [2024-12-17 00:35:23.817646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:123888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.467 [2024-12-17 00:35:23.817660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:56.467 [2024-12-17 00:35:23.817680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:123896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.467 [2024-12-17 00:35:23.817696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:56.467 [2024-12-17 00:35:23.817717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:123904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.467 [2024-12-17 00:35:23.817731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:56.467 [2024-12-17 00:35:23.817750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:123912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.467 [2024-12-17 00:35:23.817764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:56.467 [2024-12-17 00:35:23.817783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:123920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.467 [2024-12-17 00:35:23.817803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:56.467 [2024-12-17 00:35:23.817823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:123928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.467 [2024-12-17 00:35:23.817851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:56.467 [2024-12-17 00:35:23.817869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:123936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.467 [2024-12-17 00:35:23.817883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.468 [2024-12-17 00:35:23.817901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:123944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.468 [2024-12-17 00:35:23.817915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.468 [2024-12-17 00:35:23.817934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:123952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.468 [2024-12-17 00:35:23.817947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:56.468 [2024-12-17 00:35:23.817966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:72 nsid:1 lba:123960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.468 [2024-12-17 00:35:23.817980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:56.468 [2024-12-17 00:35:23.818000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:123456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.468 [2024-12-17 00:35:23.818013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:56.468 [2024-12-17 00:35:23.818032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:123464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.468 [2024-12-17 00:35:23.818063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:56.468 [2024-12-17 00:35:23.818089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:123472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.468 [2024-12-17 00:35:23.818104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:56.468 [2024-12-17 00:35:23.818124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:123480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.468 [2024-12-17 00:35:23.818139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:56.468 [2024-12-17 00:35:23.818158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:123488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.468 [2024-12-17 00:35:23.818173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:56.468 [2024-12-17 00:35:23.818192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:123496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.468 [2024-12-17 00:35:23.818206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:56.468 [2024-12-17 00:35:23.818225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:123504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.468 [2024-12-17 00:35:23.818239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:56.468 [2024-12-17 00:35:23.818258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:123512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.468 [2024-12-17 00:35:23.818274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:56.468 [2024-12-17 00:35:23.818295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:123520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.468 [2024-12-17 00:35:23.818308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:56.468 [2024-12-17 00:35:23.818328] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:123528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.468 [2024-12-17 00:35:23.818342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:56.468 [2024-12-17 00:35:23.818374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:123536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.468 [2024-12-17 00:35:23.818407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:56.468 [2024-12-17 00:35:23.818427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:123544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.468 [2024-12-17 00:35:23.818442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:56.468 [2024-12-17 00:35:23.818462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:123552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.468 [2024-12-17 00:35:23.818476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:56.468 [2024-12-17 00:35:23.818497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:123560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.468 [2024-12-17 00:35:23.818511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:56.468 [2024-12-17 00:35:23.818540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:123568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.468 [2024-12-17 00:35:23.818555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:56.468 [2024-12-17 00:35:23.819228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:123576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.468 [2024-12-17 00:35:23.819255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:56.468 [2024-12-17 00:35:23.819286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:123968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.468 [2024-12-17 00:35:23.819302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:56.468 [2024-12-17 00:35:23.819328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:123976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.468 [2024-12-17 00:35:23.819357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:56.468 [2024-12-17 00:35:23.819409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:123984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.468 [2024-12-17 00:35:23.819430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 
sqhd:0016 p:0 m:0 dnr:0 00:19:56.468 [2024-12-17 00:35:23.819462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:123992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.468 [2024-12-17 00:35:23.819478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:56.468 [2024-12-17 00:35:23.819505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:124000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.468 [2024-12-17 00:35:23.819520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:56.468 [2024-12-17 00:35:23.819547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:124008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.468 [2024-12-17 00:35:23.819562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:56.468 [2024-12-17 00:35:23.819589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:124016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.468 [2024-12-17 00:35:23.819603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:56.468 [2024-12-17 00:35:23.819647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:124024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.468 [2024-12-17 00:35:23.819667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:56.468 8670.93 IOPS, 33.87 MiB/s [2024-12-17T00:35:42.471Z] 8129.00 IOPS, 31.75 MiB/s [2024-12-17T00:35:42.471Z] 7650.82 IOPS, 29.89 MiB/s [2024-12-17T00:35:42.471Z] 7225.78 IOPS, 28.23 MiB/s [2024-12-17T00:35:42.471Z] 7252.53 IOPS, 28.33 MiB/s [2024-12-17T00:35:42.471Z] 7419.45 IOPS, 28.98 MiB/s [2024-12-17T00:35:42.471Z] 7612.43 IOPS, 29.74 MiB/s [2024-12-17T00:35:42.471Z] 7914.09 IOPS, 30.91 MiB/s [2024-12-17T00:35:42.471Z] 8169.00 IOPS, 31.91 MiB/s [2024-12-17T00:35:42.471Z] 8381.08 IOPS, 32.74 MiB/s [2024-12-17T00:35:42.471Z] 8482.60 IOPS, 33.14 MiB/s [2024-12-17T00:35:42.471Z] 8561.31 IOPS, 33.44 MiB/s [2024-12-17T00:35:42.471Z] 8636.93 IOPS, 33.74 MiB/s [2024-12-17T00:35:42.471Z] 8844.79 IOPS, 34.55 MiB/s [2024-12-17T00:35:42.471Z] 9026.41 IOPS, 35.26 MiB/s [2024-12-17T00:35:42.471Z] [2024-12-17 00:35:39.541273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.468 [2024-12-17 00:35:39.541368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:56.468 [2024-12-17 00:35:39.541433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.468 [2024-12-17 00:35:39.541452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:56.468 [2024-12-17 00:35:39.541473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.468 [2024-12-17 
00:35:39.541486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:56.468 [2024-12-17 00:35:39.541504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.468 [2024-12-17 00:35:39.541517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:56.468 [2024-12-17 00:35:39.541535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:96864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.468 [2024-12-17 00:35:39.541548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:56.468 [2024-12-17 00:35:39.541566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.468 [2024-12-17 00:35:39.541578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:56.468 [2024-12-17 00:35:39.541596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:96928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.468 [2024-12-17 00:35:39.541608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.468 [2024-12-17 00:35:39.541626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.468 [2024-12-17 00:35:39.541639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:56.468 [2024-12-17 00:35:39.541656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.468 [2024-12-17 00:35:39.541669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:56.469 [2024-12-17 00:35:39.541686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:96832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.469 [2024-12-17 00:35:39.541699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:56.469 [2024-12-17 00:35:39.541717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.469 [2024-12-17 00:35:39.541729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:56.469 [2024-12-17 00:35:39.541747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.469 [2024-12-17 00:35:39.541760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:56.469 [2024-12-17 00:35:39.541777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:96872 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:56.469 [2024-12-17 00:35:39.541790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:56.469 [2024-12-17 00:35:39.541820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:96904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.469 [2024-12-17 00:35:39.541834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:56.469 [2024-12-17 00:35:39.541852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.469 [2024-12-17 00:35:39.541864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:56.469 [2024-12-17 00:35:39.541882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.469 [2024-12-17 00:35:39.541895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:56.469 [2024-12-17 00:35:39.541913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:97328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.469 [2024-12-17 00:35:39.541925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:56.469 [2024-12-17 00:35:39.541946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:97344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.469 [2024-12-17 00:35:39.541959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:56.469 [2024-12-17 00:35:39.541977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.469 [2024-12-17 00:35:39.541990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:56.469 [2024-12-17 00:35:39.542007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:97376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.469 [2024-12-17 00:35:39.542020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:56.469 [2024-12-17 00:35:39.542038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.469 [2024-12-17 00:35:39.542051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:56.469 [2024-12-17 00:35:39.542069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:96968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.469 [2024-12-17 00:35:39.542081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:56.469 [2024-12-17 00:35:39.542099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:116 nsid:1 lba:97000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.469 [2024-12-17 00:35:39.542112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:56.469 [2024-12-17 00:35:39.542130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:97032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.469 [2024-12-17 00:35:39.542143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:56.469 [2024-12-17 00:35:39.542161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:97064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.469 [2024-12-17 00:35:39.542173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:56.469 [2024-12-17 00:35:39.542199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.469 [2024-12-17 00:35:39.542213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:56.469 [2024-12-17 00:35:39.542232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.469 [2024-12-17 00:35:39.542244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:56.469 [2024-12-17 00:35:39.542263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.469 [2024-12-17 00:35:39.542276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:56.469 [2024-12-17 00:35:39.542295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.469 [2024-12-17 00:35:39.542307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:56.469 [2024-12-17 00:35:39.542353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:97456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.469 [2024-12-17 00:35:39.542371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:56.469 [2024-12-17 00:35:39.542390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.469 [2024-12-17 00:35:39.542404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:56.469 [2024-12-17 00:35:39.542423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:97488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.469 [2024-12-17 00:35:39.542436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:56.469 [2024-12-17 00:35:39.542455] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.469 [2024-12-17 00:35:39.542468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:56.469 [2024-12-17 00:35:39.542487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.469 [2024-12-17 00:35:39.542500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:56.469 [2024-12-17 00:35:39.542518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:96992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.469 [2024-12-17 00:35:39.542531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:56.469 [2024-12-17 00:35:39.542549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.469 [2024-12-17 00:35:39.542562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:56.469 [2024-12-17 00:35:39.542581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:97056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.469 [2024-12-17 00:35:39.542594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:56.469 [2024-12-17 00:35:39.542611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:97088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.469 [2024-12-17 00:35:39.542632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:56.469 [2024-12-17 00:35:39.542652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:97536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.469 [2024-12-17 00:35:39.542666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.469 [2024-12-17 00:35:39.542698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.469 [2024-12-17 00:35:39.542711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:56.469 [2024-12-17 00:35:39.542730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:97104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.469 [2024-12-17 00:35:39.542742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:56.469 [2024-12-17 00:35:39.542760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.470 [2024-12-17 00:35:39.542773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0064 p:0 m:0 
dnr:0 00:19:56.470 [2024-12-17 00:35:39.542791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.470 [2024-12-17 00:35:39.542804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:56.470 [2024-12-17 00:35:39.542822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:97096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.470 [2024-12-17 00:35:39.542834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:56.470 [2024-12-17 00:35:39.542853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.470 [2024-12-17 00:35:39.542866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:56.470 [2024-12-17 00:35:39.542884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:97152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.470 [2024-12-17 00:35:39.542897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:56.470 [2024-12-17 00:35:39.542915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.470 [2024-12-17 00:35:39.542928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:56.470 [2024-12-17 00:35:39.544630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:97128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.470 [2024-12-17 00:35:39.544660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:56.470 [2024-12-17 00:35:39.544686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.470 [2024-12-17 00:35:39.544718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:56.470 [2024-12-17 00:35:39.544737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:97192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.470 [2024-12-17 00:35:39.544764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:56.470 [2024-12-17 00:35:39.544786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.470 [2024-12-17 00:35:39.544800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:56.470 [2024-12-17 00:35:39.544820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:97624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.470 [2024-12-17 00:35:39.544848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:56.470 [2024-12-17 00:35:39.544866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:97640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.470 [2024-12-17 00:35:39.544879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:56.470 [2024-12-17 00:35:39.544897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:97656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.470 [2024-12-17 00:35:39.544910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:56.470 [2024-12-17 00:35:39.544928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:97672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.470 [2024-12-17 00:35:39.544941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:56.470 [2024-12-17 00:35:39.544959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.470 [2024-12-17 00:35:39.544972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:56.470 [2024-12-17 00:35:39.544990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:97704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.470 [2024-12-17 00:35:39.545002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:56.470 [2024-12-17 00:35:39.545020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:97248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.470 [2024-12-17 00:35:39.545033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:56.470 [2024-12-17 00:35:39.545068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:97216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.470 [2024-12-17 00:35:39.545082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:56.470 [2024-12-17 00:35:39.545100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:97720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.470 [2024-12-17 00:35:39.545113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:56.470 [2024-12-17 00:35:39.545132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:97736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.470 [2024-12-17 00:35:39.545145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:56.470 [2024-12-17 00:35:39.545164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:97752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.470 [2024-12-17 00:35:39.545178] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:56.470 [2024-12-17 00:35:39.545205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.470 [2024-12-17 00:35:39.545219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:56.470 [2024-12-17 00:35:39.545238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.470 [2024-12-17 00:35:39.545251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:56.470 [2024-12-17 00:35:39.545271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:97800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:56.470 [2024-12-17 00:35:39.545284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:56.470 9144.53 IOPS, 35.72 MiB/s [2024-12-17T00:35:42.473Z] 9198.97 IOPS, 35.93 MiB/s [2024-12-17T00:35:42.473Z] 9240.00 IOPS, 36.09 MiB/s [2024-12-17T00:35:42.473Z] Received shutdown signal, test time was about 32.567097 seconds 00:19:56.470 00:19:56.470 Latency(us) 00:19:56.470 [2024-12-17T00:35:42.473Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.470 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:56.470 Verification LBA range: start 0x0 length 0x4000 00:19:56.470 Nvme0n1 : 32.57 9262.32 36.18 0.00 0.00 13792.01 904.84 4026531.84 00:19:56.470 [2024-12-17T00:35:42.473Z] =================================================================================================================== 00:19:56.470 [2024-12-17T00:35:42.473Z] Total : 9262.32 36.18 0.00 0.00 13792.01 904.84 4026531.84 00:19:56.470 [2024-12-17 00:35:42.112207] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times 00:19:56.470 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:56.728 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:19:56.728 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:56.728 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:19:56.728 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # nvmfcleanup 00:19:56.728 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:19:56.728 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:56.728 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:19:56.728 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:56.728 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:19:56.728 rmmod nvme_tcp 00:19:56.728 rmmod nvme_fabrics 00:19:56.728 rmmod nvme_keyring 00:19:56.728 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:56.728 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:19:56.728 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:19:56.728 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@513 -- # '[' -n 90724 ']' 00:19:56.728 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # killprocess 90724 00:19:56.728 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 90724 ']' 00:19:56.728 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 90724 00:19:56.728 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:19:56.728 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:56.728 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90724 00:19:56.728 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:56.728 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:56.728 killing process with pid 90724 00:19:56.728 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90724' 00:19:56.728 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 90724 00:19:56.728 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 90724 00:19:56.987 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:19:56.987 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:19:56.987 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:19:56.987 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:19:56.987 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:19:56.987 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-save 00:19:56.987 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-restore 00:19:56.987 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:56.987 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:56.987 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:56.987 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:56.987 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:56.987 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:56.987 00:35:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:56.987 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:56.987 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:56.987 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:56.987 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:56.987 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:56.987 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:56.987 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:56.987 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:56.987 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:56.987 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.987 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:56.987 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.245 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:19:57.245 ************************************ 00:19:57.245 END TEST nvmf_host_multipath_status 00:19:57.245 ************************************ 00:19:57.245 00:19:57.245 real 0m37.368s 00:19:57.245 user 2m0.585s 00:19:57.245 sys 0m11.065s 00:19:57.245 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:57.245 00:35:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:57.245 00:35:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:57.245 00:35:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:57.245 00:35:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:57.245 00:35:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.245 ************************************ 00:19:57.245 START TEST nvmf_discovery_remove_ifc 00:19:57.245 ************************************ 00:19:57.245 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:57.245 * Looking for test storage... 
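(Note: the xtrace lines that follow step through the lcov version check in scripts/common.sh: lt 1.15 2 delegates to cmp_versions, which splits each version string on ".", "-" and ":", normalizes every element with decimal, and compares the elements one by one. The sketch below is a simplified reconstruction from that trace, not the upstream implementation — the real helpers also keep lt/gt/eq bookkeeping so the same loop can serve the "<=", ">=" and "==" operators, which this sketch collapses into a final equality test.

    decimal() {                              # keep plain integers, fall back to 0
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
    }
    cmp_versions() {                         # usage: cmp_versions 1.15 '<' 2
        local ver1 ver2 op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ver1[v]=$(decimal "${ver1[v]:-0}")
            ver2[v]=$(decimal "${ver2[v]:-0}")
            ((ver1[v] > ver2[v])) && { [[ $op == '>' ]]; return; }
            ((ver1[v] < ver2[v])) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # every element matched
    }
    lt() { cmp_versions "$1" '<' "$2"; }     # e.g. lt 1.15 2 succeeds, as traced below

Here lt 1.15 2 succeeds because the first differing element is 1 < 2, which is why the trace below ends in return 0 and the LCOV_OPTS branch- and function-coverage flags are then exported.)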
00:19:57.245 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:57.245 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:57.245 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:19:57.245 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:57.245 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:57.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.246 --rc genhtml_branch_coverage=1 00:19:57.246 --rc genhtml_function_coverage=1 00:19:57.246 --rc genhtml_legend=1 00:19:57.246 --rc geninfo_all_blocks=1 00:19:57.246 --rc geninfo_unexecuted_blocks=1 00:19:57.246 00:19:57.246 ' 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:57.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.246 --rc genhtml_branch_coverage=1 00:19:57.246 --rc genhtml_function_coverage=1 00:19:57.246 --rc genhtml_legend=1 00:19:57.246 --rc geninfo_all_blocks=1 00:19:57.246 --rc geninfo_unexecuted_blocks=1 00:19:57.246 00:19:57.246 ' 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:57.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.246 --rc genhtml_branch_coverage=1 00:19:57.246 --rc genhtml_function_coverage=1 00:19:57.246 --rc genhtml_legend=1 00:19:57.246 --rc geninfo_all_blocks=1 00:19:57.246 --rc geninfo_unexecuted_blocks=1 00:19:57.246 00:19:57.246 ' 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:57.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.246 --rc genhtml_branch_coverage=1 00:19:57.246 --rc genhtml_function_coverage=1 00:19:57.246 --rc genhtml_legend=1 00:19:57.246 --rc geninfo_all_blocks=1 00:19:57.246 --rc geninfo_unexecuted_blocks=1 00:19:57.246 00:19:57.246 ' 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:57.246 00:35:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:57.246 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:57.246 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:57.505 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:19:57.505 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:19:57.505 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:19:57.505 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:19:57.505 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:19:57.505 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:19:57.505 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:19:57.505 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:19:57.505 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:57.505 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # prepare_net_devs 00:19:57.505 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:19:57.505 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:19:57.505 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.505 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:57.505 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.505 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:19:57.505 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:19:57.505 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:19:57.505 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:19:57.505 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:19:57.505 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@456 -- # nvmf_veth_init 00:19:57.505 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:57.505 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:57.505 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:57.505 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:57.505 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:57.505 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:57.505 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:57.505 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:57.505 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:57.505 00:35:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:57.505 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:57.505 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:57.505 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:57.505 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:57.505 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:57.505 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:57.506 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:57.506 Cannot find device "nvmf_init_br" 00:19:57.506 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:19:57.506 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:57.506 Cannot find device "nvmf_init_br2" 00:19:57.506 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:19:57.506 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:57.506 Cannot find device "nvmf_tgt_br" 00:19:57.506 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:19:57.506 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:57.506 Cannot find device "nvmf_tgt_br2" 00:19:57.506 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:19:57.506 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:57.506 Cannot find device "nvmf_init_br" 00:19:57.506 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:19:57.506 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:57.506 Cannot find device "nvmf_init_br2" 00:19:57.506 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:19:57.506 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:57.506 Cannot find device "nvmf_tgt_br" 00:19:57.506 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:19:57.506 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:57.506 Cannot find device "nvmf_tgt_br2" 00:19:57.506 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:19:57.506 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:57.506 Cannot find device "nvmf_br" 00:19:57.506 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:19:57.506 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:57.506 Cannot find device "nvmf_init_if" 00:19:57.506 00:35:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:19:57.506 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:57.506 Cannot find device "nvmf_init_if2" 00:19:57.506 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:19:57.506 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:57.506 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:57.506 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:19:57.506 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:57.506 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:57.506 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:19:57.506 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:57.506 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:57.506 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:57.506 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:57.506 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:57.506 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:57.506 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:57.506 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:57.506 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:57.506 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:57.506 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:57.506 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:57.506 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:57.765 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:57.765 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:57.765 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:57.765 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:57.765 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:57.765 00:35:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:57.765 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:57.765 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:57.765 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:57.765 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:57.765 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:57.765 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:57.765 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:57.765 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:57.766 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:57.766 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:57.766 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:57.766 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:57.766 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:57.766 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:57.766 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:57.766 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:19:57.766 00:19:57.766 --- 10.0.0.3 ping statistics --- 00:19:57.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.766 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:19:57.766 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:57.766 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:57.766 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:19:57.766 00:19:57.766 --- 10.0.0.4 ping statistics --- 00:19:57.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.766 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:19:57.766 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:57.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:57.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:19:57.766 00:19:57.766 --- 10.0.0.1 ping statistics --- 00:19:57.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.766 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:19:57.766 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:57.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:57.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:19:57.766 00:19:57.766 --- 10.0.0.2 ping statistics --- 00:19:57.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.766 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:19:57.766 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:57.766 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@457 -- # return 0 00:19:57.766 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:57.766 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:57.766 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:57.766 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:57.766 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:57.766 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:57.766 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:57.766 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:19:57.766 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:57.766 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:57.766 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:57.766 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # nvmfpid=91586 00:19:57.766 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:57.766 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # waitforlisten 91586 00:19:57.766 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 91586 ']' 00:19:57.766 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.766 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:57.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:57.766 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:57.766 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:57.766 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:57.766 [2024-12-17 00:35:43.733999] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:19:57.766 [2024-12-17 00:35:43.734089] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:58.025 [2024-12-17 00:35:43.867879] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.025 [2024-12-17 00:35:43.899924] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:58.025 [2024-12-17 00:35:43.899970] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:58.025 [2024-12-17 00:35:43.899996] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:58.025 [2024-12-17 00:35:43.900002] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:58.025 [2024-12-17 00:35:43.900008] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:58.025 [2024-12-17 00:35:43.900036] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.025 [2024-12-17 00:35:43.926509] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:58.025 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:58.025 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:19:58.025 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:58.025 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:58.025 00:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:58.284 00:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:58.284 00:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:19:58.284 00:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.284 00:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:58.284 [2024-12-17 00:35:44.044695] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:58.284 [2024-12-17 00:35:44.052807] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:19:58.284 null0 00:19:58.284 [2024-12-17 00:35:44.084725] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:58.284 00:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.284 00:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=91610 00:19:58.284 00:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt 
-m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:19:58.284 00:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 91610 /tmp/host.sock 00:19:58.284 00:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 91610 ']' 00:19:58.284 00:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:19:58.284 00:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:58.284 00:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:58.284 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:58.284 00:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:58.284 00:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:58.284 [2024-12-17 00:35:44.152893] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:19:58.284 [2024-12-17 00:35:44.153001] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91610 ] 00:19:58.284 [2024-12-17 00:35:44.287502] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.542 [2024-12-17 00:35:44.329479] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.542 00:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:58.542 00:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:19:58.543 00:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:58.543 00:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:19:58.543 00:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.543 00:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:58.543 00:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.543 00:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:19:58.543 00:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.543 00:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:58.543 [2024-12-17 00:35:44.461922] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:58.543 00:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.543 00:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:19:58.543 00:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.543 00:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:59.917 [2024-12-17 00:35:45.497469] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:59.917 [2024-12-17 00:35:45.497517] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:59.917 [2024-12-17 00:35:45.497535] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:59.917 [2024-12-17 00:35:45.503512] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:19:59.917 [2024-12-17 00:35:45.559834] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:59.917 [2024-12-17 00:35:45.559904] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:59.917 [2024-12-17 00:35:45.559929] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:59.917 [2024-12-17 00:35:45.559943] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:59.917 [2024-12-17 00:35:45.559962] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:59.917 00:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.917 00:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:19:59.917 00:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:59.917 00:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:59.917 [2024-12-17 00:35:45.566255] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6e46f0 was disconnected and freed. delete nvme_qpair. 
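The repeated bdev_get_bdevs calls above (and in the blocks that follow) come from the test's bdev-list polling helpers. A minimal reconstruction of that pattern, assuming a direct call to scripts/rpc.py in place of the rpc_cmd wrapper used by the trace; only the jq/sort/xargs pipeline, the host socket, and the one-second sleep are taken from the log, the helper bodies themselves are illustrative:

  #!/usr/bin/env bash
  # Sketch only: poll the host app's bdev list until it matches an expected value.
  HOST_SOCK=/tmp/host.sock
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed path to the SPDK RPC client

  get_bdev_list() {
      # Same pipeline as in the trace: bdev names, sorted, joined on one line.
      "$RPC" -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
      # Retry once per second until the list equals the expected string
      # ("nvme0n1" after the first attach, "" once the path is torn down).
      local expected=$1
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1
      done
  }

In the run above the test invokes the equivalent of wait_for_bdev nvme0n1 right after discovery attaches, which is why the same bdev_get_bdevs/jq/sort/xargs sequence keeps repeating until the list settles.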
00:19:59.917 00:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:59.917 00:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.917 00:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:59.917 00:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:59.917 00:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:59.917 00:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.917 00:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:19:59.917 00:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:19:59.917 00:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:19:59.917 00:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:19:59.917 00:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:59.917 00:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:59.917 00:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.917 00:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:59.917 00:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:59.917 00:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:59.917 00:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:59.917 00:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.917 00:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:59.917 00:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:00.853 00:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:00.853 00:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:00.853 00:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:00.853 00:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.853 00:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:00.853 00:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:00.853 00:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:00.853 00:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.853 00:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:00.853 00:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:01.788 00:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:01.788 00:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:01.788 00:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.788 00:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:01.788 00:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:01.788 00:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:01.788 00:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:01.788 00:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.046 00:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:02.046 00:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:02.981 00:35:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:02.981 00:35:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:02.981 00:35:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:02.981 00:35:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.981 00:35:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:02.981 00:35:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:02.981 00:35:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:02.981 00:35:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.981 00:35:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:02.981 00:35:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:03.916 00:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:03.916 00:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:03.916 00:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:03.916 00:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.916 00:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:03.916 00:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:03.916 00:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:03.916 00:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:20:03.916 00:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:03.917 00:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:05.336 00:35:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:05.336 00:35:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:05.336 00:35:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:05.336 00:35:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.336 00:35:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:05.336 00:35:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:05.336 00:35:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:05.336 00:35:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.336 00:35:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:05.336 00:35:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:05.336 [2024-12-17 00:35:50.988375] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:20:05.336 [2024-12-17 00:35:50.988460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.336 [2024-12-17 00:35:50.988476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.336 [2024-12-17 00:35:50.988488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.336 [2024-12-17 00:35:50.988497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.336 [2024-12-17 00:35:50.988508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.336 [2024-12-17 00:35:50.988519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.336 [2024-12-17 00:35:50.988529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.336 [2024-12-17 00:35:50.988537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.336 [2024-12-17 00:35:50.988556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.337 [2024-12-17 00:35:50.988582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.337 [2024-12-17 00:35:50.988593] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bfc40 is same with the state(6) to be set 
00:20:05.337 [2024-12-17 00:35:50.998365] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6bfc40 (9): Bad file descriptor 00:20:05.337 [2024-12-17 00:35:51.008383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.272 00:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:06.272 00:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:06.272 00:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.272 00:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:06.272 00:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:06.272 00:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:06.272 00:35:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:06.272 [2024-12-17 00:35:52.032449] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:20:06.272 [2024-12-17 00:35:52.032568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bfc40 with addr=10.0.0.3, port=4420 00:20:06.272 [2024-12-17 00:35:52.032601] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6bfc40 is same with the state(6) to be set 00:20:06.272 [2024-12-17 00:35:52.032651] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6bfc40 (9): Bad file descriptor 00:20:06.272 [2024-12-17 00:35:52.033535] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:06.272 [2024-12-17 00:35:52.033616] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.273 [2024-12-17 00:35:52.033640] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.273 [2024-12-17 00:35:52.033664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.273 [2024-12-17 00:35:52.033703] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.273 [2024-12-17 00:35:52.033727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.273 00:35:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.273 00:35:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:06.273 00:35:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:07.208 [2024-12-17 00:35:53.033784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
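The connect() errno 110 failures, the repeated controller resets, and the "in failed state" messages above follow from the short timers the test passed to bdev_nvme_start_discovery earlier in this run. A sketch of that call with the values copied from the trace; the only assumption is that the rpc_cmd wrapper resolves to scripts/rpc.py:

  # Sketch: discovery with aggressive reconnect/give-up timers, as used by this test.
  #   --reconnect-delay-sec 1       retry a lost path once per second
  #   --fast-io-fail-timeout-sec 1  fail outstanding I/O after one second without a path
  #   --ctrlr-loss-timeout-sec 2    give up and delete the controller (and its bdev) after two seconds
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed path
  "$RPC" -s /tmp/host.sock bdev_nvme_start_discovery \
      -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach

With the target interface gone, the one-second reconnect attempts keep failing until the two-second controller-loss timer expires, at which point the nvme0 controller and its nvme0n1 bdev are deleted and the polled bdev list goes empty.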
00:20:07.208 [2024-12-17 00:35:53.033838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.209 [2024-12-17 00:35:53.033864] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.209 [2024-12-17 00:35:53.033872] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:20:07.209 [2024-12-17 00:35:53.033891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.209 [2024-12-17 00:35:53.033916] bdev_nvme.c:6913:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:20:07.209 [2024-12-17 00:35:53.033948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.209 [2024-12-17 00:35:53.033962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.209 [2024-12-17 00:35:53.033973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.209 [2024-12-17 00:35:53.033981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.209 [2024-12-17 00:35:53.033990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.209 [2024-12-17 00:35:53.033998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.209 [2024-12-17 00:35:53.034006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.209 [2024-12-17 00:35:53.034014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.209 [2024-12-17 00:35:53.034023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.209 [2024-12-17 00:35:53.034046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.209 [2024-12-17 00:35:53.034070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:20:07.209 [2024-12-17 00:35:53.034671] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ae180 (9): Bad file descriptor 00:20:07.209 [2024-12-17 00:35:53.035682] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:20:07.209 [2024-12-17 00:35:53.035717] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:20:07.209 00:35:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:07.209 00:35:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:07.209 00:35:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.209 00:35:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:07.209 00:35:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:07.209 00:35:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:07.209 00:35:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:07.209 00:35:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.209 00:35:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:20:07.209 00:35:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:07.209 00:35:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:07.209 00:35:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:20:07.209 00:35:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:07.209 00:35:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:07.209 00:35:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:07.209 00:35:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.209 00:35:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:07.209 00:35:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:07.209 00:35:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:07.209 00:35:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.209 00:35:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:07.209 00:35:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:08.584 00:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:08.584 00:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:08.584 00:35:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.584 00:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:08.584 00:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:08.584 00:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:08.584 00:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:08.584 00:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.584 00:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:08.584 00:35:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:09.153 [2024-12-17 00:35:55.041684] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:09.153 [2024-12-17 00:35:55.041709] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:09.153 [2024-12-17 00:35:55.041741] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:09.153 [2024-12-17 00:35:55.047734] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:20:09.153 [2024-12-17 00:35:55.103532] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:09.153 [2024-12-17 00:35:55.103577] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:09.153 [2024-12-17 00:35:55.103604] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:09.153 [2024-12-17 00:35:55.103618] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:20:09.153 [2024-12-17 00:35:55.103626] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:09.153 [2024-12-17 00:35:55.110028] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6f3af0 was disconnected and freed. delete nvme_qpair. 
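
[editor's note] The trace around this point re-adds 10.0.0.3 to nvmf_tgt_if inside the target namespace and then polls the host's bdev list until the discovery service re-attaches nvme1n1. A minimal sketch of that wait_for_bdev/get_bdev_list pattern, assuming SPDK's scripts/rpc.py is on PATH as rpc.py, the host app listens on /tmp/host.sock as in the trace, and jq is installed (the one-second poll interval and the single expected bdev name mirror the traced helpers):

#!/usr/bin/env bash
# Sketch only: poll the SPDK host until the given bdev shows up.
wait_for_bdev() {
    local want=$1
    local names
    while true; do
        # bdev_get_bdevs returns a JSON array; flatten the names onto one line.
        names=$(rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
        # Assumes the list ends up containing exactly the one expected bdev.
        [[ $names == "$want" ]] && break
        sleep 1
    done
}

wait_for_bdev nvme1n1   # returns once discovery has re-attached the namespace
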
00:20:09.412 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:09.412 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:09.412 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.412 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:09.412 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:09.412 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:09.412 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:09.412 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.412 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:20:09.412 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:20:09.412 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 91610 00:20:09.412 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 91610 ']' 00:20:09.412 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 91610 00:20:09.412 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:20:09.412 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:09.412 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91610 00:20:09.412 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:09.412 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:09.412 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91610' 00:20:09.412 killing process with pid 91610 00:20:09.412 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 91610 00:20:09.412 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 91610 00:20:09.671 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:20:09.671 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # nvmfcleanup 00:20:09.671 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:20:09.671 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:09.671 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:20:09.671 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:09.671 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:09.671 rmmod nvme_tcp 00:20:09.671 rmmod nvme_fabrics 00:20:09.671 rmmod nvme_keyring 00:20:09.671 00:35:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:09.671 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:20:09.671 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:20:09.671 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@513 -- # '[' -n 91586 ']' 00:20:09.671 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # killprocess 91586 00:20:09.671 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 91586 ']' 00:20:09.671 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 91586 00:20:09.671 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:20:09.671 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:09.671 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91586 00:20:09.671 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:09.671 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:09.671 killing process with pid 91586 00:20:09.671 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91586' 00:20:09.671 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 91586 00:20:09.671 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 91586 00:20:09.930 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:20:09.930 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:20:09.930 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:20:09.930 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:20:09.930 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-save 00:20:09.930 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:20:09.930 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-restore 00:20:09.930 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:09.930 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:09.930 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:09.930 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:09.930 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:09.930 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:09.930 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:09.930 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:09.930 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:09.930 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:09.930 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:09.930 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:09.930 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:09.930 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:10.189 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:10.189 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:10.189 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.189 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:10.189 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.189 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:20:10.189 00:20:10.189 real 0m12.935s 00:20:10.189 user 0m22.061s 00:20:10.189 sys 0m2.430s 00:20:10.189 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:10.189 00:35:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:10.189 ************************************ 00:20:10.189 END TEST nvmf_discovery_remove_ifc 00:20:10.189 ************************************ 00:20:10.189 00:35:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:10.189 00:35:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:10.189 00:35:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:10.189 00:35:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.189 ************************************ 00:20:10.189 START TEST nvmf_identify_kernel_target 00:20:10.189 ************************************ 00:20:10.189 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:10.189 * Looking for test storage... 
00:20:10.189 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:10.189 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:10.189 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:20:10.189 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:10.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:10.449 --rc genhtml_branch_coverage=1 00:20:10.449 --rc genhtml_function_coverage=1 00:20:10.449 --rc genhtml_legend=1 00:20:10.449 --rc geninfo_all_blocks=1 00:20:10.449 --rc geninfo_unexecuted_blocks=1 00:20:10.449 00:20:10.449 ' 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:10.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:10.449 --rc genhtml_branch_coverage=1 00:20:10.449 --rc genhtml_function_coverage=1 00:20:10.449 --rc genhtml_legend=1 00:20:10.449 --rc geninfo_all_blocks=1 00:20:10.449 --rc geninfo_unexecuted_blocks=1 00:20:10.449 00:20:10.449 ' 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:10.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:10.449 --rc genhtml_branch_coverage=1 00:20:10.449 --rc genhtml_function_coverage=1 00:20:10.449 --rc genhtml_legend=1 00:20:10.449 --rc geninfo_all_blocks=1 00:20:10.449 --rc geninfo_unexecuted_blocks=1 00:20:10.449 00:20:10.449 ' 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:10.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:10.449 --rc genhtml_branch_coverage=1 00:20:10.449 --rc genhtml_function_coverage=1 00:20:10.449 --rc genhtml_legend=1 00:20:10.449 --rc geninfo_all_blocks=1 00:20:10.449 --rc geninfo_unexecuted_blocks=1 00:20:10.449 00:20:10.449 ' 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
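
[editor's note] The lcov check traced above walks scripts/common.sh's cmp_versions helper: each version string is split on '.', '-' or ':' and compared field by field. A compact sketch of that comparison, assuming purely numeric fields (the real helper also validates non-numeric components before comparing):

#!/usr/bin/env bash
# Sketch only: return 0 if $1 is strictly older than $2.
version_lt() {
    local IFS=.-:
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov is older than 2"   # matches the "lt 1.15 2" check in the trace
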
00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:20:10.449 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:10.450 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:20:10.450 00:35:56 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:10.450 00:35:56 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:10.450 Cannot find device "nvmf_init_br" 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:10.450 Cannot find device "nvmf_init_br2" 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:10.450 Cannot find device "nvmf_tgt_br" 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:10.450 Cannot find device "nvmf_tgt_br2" 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:10.450 Cannot find device "nvmf_init_br" 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:10.450 Cannot find device "nvmf_init_br2" 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:10.450 Cannot find device "nvmf_tgt_br" 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:10.450 Cannot find device "nvmf_tgt_br2" 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:10.450 Cannot find device "nvmf_br" 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:10.450 Cannot find device "nvmf_init_if" 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:10.450 Cannot find device "nvmf_init_if2" 00:20:10.450 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:20:10.451 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:10.451 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:10.451 00:35:56 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:20:10.451 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:10.451 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:10.451 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:20:10.451 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:10.451 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:10.451 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:10.451 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:10.451 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:10.710 00:35:56 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:10.710 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:10.710 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:20:10.710 00:20:10.710 --- 10.0.0.3 ping statistics --- 00:20:10.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.710 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:10.710 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:10.710 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:20:10.710 00:20:10.710 --- 10.0.0.4 ping statistics --- 00:20:10.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.710 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:10.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:10.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:20:10.710 00:20:10.710 --- 10.0.0.1 ping statistics --- 00:20:10.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.710 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:10.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:10.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:20:10.710 00:20:10.710 --- 10.0.0.2 ping statistics --- 00:20:10.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.710 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # return 0 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@765 -- # local ip 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # local block nvme 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # [[ ! -e /sys/module/nvmet ]] 00:20:10.710 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@666 -- # modprobe nvmet 00:20:10.970 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:10.970 00:35:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:11.229 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:11.229 Waiting for block devices as requested 00:20:11.229 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:11.229 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:11.487 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:11.487 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:11.487 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:20:11.487 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:20:11.487 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:11.487 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:11.487 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:20:11.487 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:20:11.487 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:11.487 No valid GPT data, bailing 00:20:11.487 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:11.487 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:11.487 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:11.487 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:20:11.487 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:11.487 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:11.487 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n2 00:20:11.487 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:20:11.487 00:35:57 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:11.487 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:11.487 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n2 00:20:11.487 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:20:11.487 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:11.487 No valid GPT data, bailing 00:20:11.487 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:11.487 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:11.487 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:11.487 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n2 00:20:11.487 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:11.487 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:11.487 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n3 00:20:11.487 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:20:11.487 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:11.487 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:11.487 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n3 00:20:11.487 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:20:11.487 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:11.487 No valid GPT data, bailing 00:20:11.745 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:11.745 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:11.745 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:11.745 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n3 00:20:11.745 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:11.745 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:11.745 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme1n1 00:20:11.745 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:20:11.745 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:11.745 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:11.745 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme1n1 00:20:11.745 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:20:11.745 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:11.745 No valid GPT data, bailing 00:20:11.745 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:11.745 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:11.745 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:11.745 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme1n1 00:20:11.745 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # [[ -b /dev/nvme1n1 ]] 00:20:11.745 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:11.745 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:11.745 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:11.745 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:11.745 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo 1 00:20:11.746 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@692 -- # echo /dev/nvme1n1 00:20:11.746 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:20:11.746 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:20:11.746 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo tcp 00:20:11.746 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 4420 00:20:11.746 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo ipv4 00:20:11.746 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:11.746 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid=93817295-c2e4-400f-aefe-caa93fc06858 -a 10.0.0.1 -t tcp -s 4420 00:20:11.746 00:20:11.746 Discovery Log Number of Records 2, Generation counter 2 00:20:11.746 =====Discovery Log Entry 0====== 00:20:11.746 trtype: tcp 00:20:11.746 adrfam: ipv4 00:20:11.746 subtype: current discovery subsystem 00:20:11.746 treq: not specified, sq flow control disable supported 00:20:11.746 portid: 1 00:20:11.746 trsvcid: 4420 00:20:11.746 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:11.746 traddr: 10.0.0.1 00:20:11.746 eflags: none 00:20:11.746 sectype: none 00:20:11.746 =====Discovery Log Entry 1====== 00:20:11.746 trtype: tcp 00:20:11.746 adrfam: ipv4 00:20:11.746 subtype: nvme subsystem 00:20:11.746 treq: not 
specified, sq flow control disable supported 00:20:11.746 portid: 1 00:20:11.746 trsvcid: 4420 00:20:11.746 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:11.746 traddr: 10.0.0.1 00:20:11.746 eflags: none 00:20:11.746 sectype: none 00:20:11.746 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:20:11.746 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:20:12.006 ===================================================== 00:20:12.006 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:12.006 ===================================================== 00:20:12.006 Controller Capabilities/Features 00:20:12.006 ================================ 00:20:12.006 Vendor ID: 0000 00:20:12.006 Subsystem Vendor ID: 0000 00:20:12.006 Serial Number: c121c2782d5f13d81a8c 00:20:12.006 Model Number: Linux 00:20:12.006 Firmware Version: 6.8.9-20 00:20:12.006 Recommended Arb Burst: 0 00:20:12.006 IEEE OUI Identifier: 00 00 00 00:20:12.006 Multi-path I/O 00:20:12.006 May have multiple subsystem ports: No 00:20:12.006 May have multiple controllers: No 00:20:12.006 Associated with SR-IOV VF: No 00:20:12.006 Max Data Transfer Size: Unlimited 00:20:12.006 Max Number of Namespaces: 0 00:20:12.006 Max Number of I/O Queues: 1024 00:20:12.006 NVMe Specification Version (VS): 1.3 00:20:12.006 NVMe Specification Version (Identify): 1.3 00:20:12.006 Maximum Queue Entries: 1024 00:20:12.006 Contiguous Queues Required: No 00:20:12.006 Arbitration Mechanisms Supported 00:20:12.006 Weighted Round Robin: Not Supported 00:20:12.006 Vendor Specific: Not Supported 00:20:12.006 Reset Timeout: 7500 ms 00:20:12.006 Doorbell Stride: 4 bytes 00:20:12.006 NVM Subsystem Reset: Not Supported 00:20:12.006 Command Sets Supported 00:20:12.006 NVM Command Set: Supported 00:20:12.006 Boot Partition: Not Supported 00:20:12.006 Memory Page Size Minimum: 4096 bytes 00:20:12.006 Memory Page Size Maximum: 4096 bytes 00:20:12.006 Persistent Memory Region: Not Supported 00:20:12.006 Optional Asynchronous Events Supported 00:20:12.006 Namespace Attribute Notices: Not Supported 00:20:12.006 Firmware Activation Notices: Not Supported 00:20:12.006 ANA Change Notices: Not Supported 00:20:12.006 PLE Aggregate Log Change Notices: Not Supported 00:20:12.006 LBA Status Info Alert Notices: Not Supported 00:20:12.006 EGE Aggregate Log Change Notices: Not Supported 00:20:12.006 Normal NVM Subsystem Shutdown event: Not Supported 00:20:12.006 Zone Descriptor Change Notices: Not Supported 00:20:12.006 Discovery Log Change Notices: Supported 00:20:12.006 Controller Attributes 00:20:12.006 128-bit Host Identifier: Not Supported 00:20:12.006 Non-Operational Permissive Mode: Not Supported 00:20:12.006 NVM Sets: Not Supported 00:20:12.006 Read Recovery Levels: Not Supported 00:20:12.006 Endurance Groups: Not Supported 00:20:12.006 Predictable Latency Mode: Not Supported 00:20:12.006 Traffic Based Keep ALive: Not Supported 00:20:12.006 Namespace Granularity: Not Supported 00:20:12.006 SQ Associations: Not Supported 00:20:12.006 UUID List: Not Supported 00:20:12.006 Multi-Domain Subsystem: Not Supported 00:20:12.006 Fixed Capacity Management: Not Supported 00:20:12.006 Variable Capacity Management: Not Supported 00:20:12.006 Delete Endurance Group: Not Supported 00:20:12.006 Delete NVM Set: Not Supported 00:20:12.006 Extended LBA Formats Supported: Not Supported 00:20:12.006 Flexible Data 
Placement Supported: Not Supported 00:20:12.006 00:20:12.006 Controller Memory Buffer Support 00:20:12.006 ================================ 00:20:12.006 Supported: No 00:20:12.006 00:20:12.006 Persistent Memory Region Support 00:20:12.006 ================================ 00:20:12.006 Supported: No 00:20:12.006 00:20:12.006 Admin Command Set Attributes 00:20:12.006 ============================ 00:20:12.006 Security Send/Receive: Not Supported 00:20:12.006 Format NVM: Not Supported 00:20:12.006 Firmware Activate/Download: Not Supported 00:20:12.006 Namespace Management: Not Supported 00:20:12.006 Device Self-Test: Not Supported 00:20:12.006 Directives: Not Supported 00:20:12.006 NVMe-MI: Not Supported 00:20:12.006 Virtualization Management: Not Supported 00:20:12.006 Doorbell Buffer Config: Not Supported 00:20:12.006 Get LBA Status Capability: Not Supported 00:20:12.006 Command & Feature Lockdown Capability: Not Supported 00:20:12.006 Abort Command Limit: 1 00:20:12.006 Async Event Request Limit: 1 00:20:12.006 Number of Firmware Slots: N/A 00:20:12.006 Firmware Slot 1 Read-Only: N/A 00:20:12.006 Firmware Activation Without Reset: N/A 00:20:12.006 Multiple Update Detection Support: N/A 00:20:12.006 Firmware Update Granularity: No Information Provided 00:20:12.006 Per-Namespace SMART Log: No 00:20:12.006 Asymmetric Namespace Access Log Page: Not Supported 00:20:12.006 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:12.006 Command Effects Log Page: Not Supported 00:20:12.006 Get Log Page Extended Data: Supported 00:20:12.006 Telemetry Log Pages: Not Supported 00:20:12.006 Persistent Event Log Pages: Not Supported 00:20:12.006 Supported Log Pages Log Page: May Support 00:20:12.006 Commands Supported & Effects Log Page: Not Supported 00:20:12.006 Feature Identifiers & Effects Log Page:May Support 00:20:12.006 NVMe-MI Commands & Effects Log Page: May Support 00:20:12.006 Data Area 4 for Telemetry Log: Not Supported 00:20:12.006 Error Log Page Entries Supported: 1 00:20:12.006 Keep Alive: Not Supported 00:20:12.006 00:20:12.006 NVM Command Set Attributes 00:20:12.006 ========================== 00:20:12.006 Submission Queue Entry Size 00:20:12.006 Max: 1 00:20:12.006 Min: 1 00:20:12.006 Completion Queue Entry Size 00:20:12.007 Max: 1 00:20:12.007 Min: 1 00:20:12.007 Number of Namespaces: 0 00:20:12.007 Compare Command: Not Supported 00:20:12.007 Write Uncorrectable Command: Not Supported 00:20:12.007 Dataset Management Command: Not Supported 00:20:12.007 Write Zeroes Command: Not Supported 00:20:12.007 Set Features Save Field: Not Supported 00:20:12.007 Reservations: Not Supported 00:20:12.007 Timestamp: Not Supported 00:20:12.007 Copy: Not Supported 00:20:12.007 Volatile Write Cache: Not Present 00:20:12.007 Atomic Write Unit (Normal): 1 00:20:12.007 Atomic Write Unit (PFail): 1 00:20:12.007 Atomic Compare & Write Unit: 1 00:20:12.007 Fused Compare & Write: Not Supported 00:20:12.007 Scatter-Gather List 00:20:12.007 SGL Command Set: Supported 00:20:12.007 SGL Keyed: Not Supported 00:20:12.007 SGL Bit Bucket Descriptor: Not Supported 00:20:12.007 SGL Metadata Pointer: Not Supported 00:20:12.007 Oversized SGL: Not Supported 00:20:12.007 SGL Metadata Address: Not Supported 00:20:12.007 SGL Offset: Supported 00:20:12.007 Transport SGL Data Block: Not Supported 00:20:12.007 Replay Protected Memory Block: Not Supported 00:20:12.007 00:20:12.007 Firmware Slot Information 00:20:12.007 ========================= 00:20:12.007 Active slot: 0 00:20:12.007 00:20:12.007 00:20:12.007 Error Log 
00:20:12.007 ========= 00:20:12.007 00:20:12.007 Active Namespaces 00:20:12.007 ================= 00:20:12.007 Discovery Log Page 00:20:12.007 ================== 00:20:12.007 Generation Counter: 2 00:20:12.007 Number of Records: 2 00:20:12.007 Record Format: 0 00:20:12.007 00:20:12.007 Discovery Log Entry 0 00:20:12.007 ---------------------- 00:20:12.007 Transport Type: 3 (TCP) 00:20:12.007 Address Family: 1 (IPv4) 00:20:12.007 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:12.007 Entry Flags: 00:20:12.007 Duplicate Returned Information: 0 00:20:12.007 Explicit Persistent Connection Support for Discovery: 0 00:20:12.007 Transport Requirements: 00:20:12.007 Secure Channel: Not Specified 00:20:12.007 Port ID: 1 (0x0001) 00:20:12.007 Controller ID: 65535 (0xffff) 00:20:12.007 Admin Max SQ Size: 32 00:20:12.007 Transport Service Identifier: 4420 00:20:12.007 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:12.007 Transport Address: 10.0.0.1 00:20:12.007 Discovery Log Entry 1 00:20:12.007 ---------------------- 00:20:12.007 Transport Type: 3 (TCP) 00:20:12.007 Address Family: 1 (IPv4) 00:20:12.007 Subsystem Type: 2 (NVM Subsystem) 00:20:12.007 Entry Flags: 00:20:12.007 Duplicate Returned Information: 0 00:20:12.007 Explicit Persistent Connection Support for Discovery: 0 00:20:12.007 Transport Requirements: 00:20:12.007 Secure Channel: Not Specified 00:20:12.007 Port ID: 1 (0x0001) 00:20:12.007 Controller ID: 65535 (0xffff) 00:20:12.007 Admin Max SQ Size: 32 00:20:12.007 Transport Service Identifier: 4420 00:20:12.007 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:20:12.007 Transport Address: 10.0.0.1 00:20:12.007 00:35:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:12.007 get_feature(0x01) failed 00:20:12.007 get_feature(0x02) failed 00:20:12.007 get_feature(0x04) failed 00:20:12.007 ===================================================== 00:20:12.007 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:12.007 ===================================================== 00:20:12.007 Controller Capabilities/Features 00:20:12.007 ================================ 00:20:12.007 Vendor ID: 0000 00:20:12.007 Subsystem Vendor ID: 0000 00:20:12.007 Serial Number: 13cd866c865c053c5279 00:20:12.007 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:20:12.007 Firmware Version: 6.8.9-20 00:20:12.007 Recommended Arb Burst: 6 00:20:12.007 IEEE OUI Identifier: 00 00 00 00:20:12.007 Multi-path I/O 00:20:12.007 May have multiple subsystem ports: Yes 00:20:12.007 May have multiple controllers: Yes 00:20:12.007 Associated with SR-IOV VF: No 00:20:12.007 Max Data Transfer Size: Unlimited 00:20:12.007 Max Number of Namespaces: 1024 00:20:12.007 Max Number of I/O Queues: 128 00:20:12.007 NVMe Specification Version (VS): 1.3 00:20:12.007 NVMe Specification Version (Identify): 1.3 00:20:12.007 Maximum Queue Entries: 1024 00:20:12.007 Contiguous Queues Required: No 00:20:12.007 Arbitration Mechanisms Supported 00:20:12.007 Weighted Round Robin: Not Supported 00:20:12.007 Vendor Specific: Not Supported 00:20:12.007 Reset Timeout: 7500 ms 00:20:12.007 Doorbell Stride: 4 bytes 00:20:12.007 NVM Subsystem Reset: Not Supported 00:20:12.007 Command Sets Supported 00:20:12.007 NVM Command Set: Supported 00:20:12.007 Boot Partition: Not Supported 00:20:12.007 Memory 
Page Size Minimum: 4096 bytes 00:20:12.007 Memory Page Size Maximum: 4096 bytes 00:20:12.007 Persistent Memory Region: Not Supported 00:20:12.007 Optional Asynchronous Events Supported 00:20:12.007 Namespace Attribute Notices: Supported 00:20:12.007 Firmware Activation Notices: Not Supported 00:20:12.007 ANA Change Notices: Supported 00:20:12.007 PLE Aggregate Log Change Notices: Not Supported 00:20:12.007 LBA Status Info Alert Notices: Not Supported 00:20:12.007 EGE Aggregate Log Change Notices: Not Supported 00:20:12.007 Normal NVM Subsystem Shutdown event: Not Supported 00:20:12.007 Zone Descriptor Change Notices: Not Supported 00:20:12.007 Discovery Log Change Notices: Not Supported 00:20:12.007 Controller Attributes 00:20:12.007 128-bit Host Identifier: Supported 00:20:12.007 Non-Operational Permissive Mode: Not Supported 00:20:12.007 NVM Sets: Not Supported 00:20:12.007 Read Recovery Levels: Not Supported 00:20:12.007 Endurance Groups: Not Supported 00:20:12.007 Predictable Latency Mode: Not Supported 00:20:12.007 Traffic Based Keep ALive: Supported 00:20:12.007 Namespace Granularity: Not Supported 00:20:12.007 SQ Associations: Not Supported 00:20:12.007 UUID List: Not Supported 00:20:12.007 Multi-Domain Subsystem: Not Supported 00:20:12.007 Fixed Capacity Management: Not Supported 00:20:12.007 Variable Capacity Management: Not Supported 00:20:12.007 Delete Endurance Group: Not Supported 00:20:12.007 Delete NVM Set: Not Supported 00:20:12.007 Extended LBA Formats Supported: Not Supported 00:20:12.007 Flexible Data Placement Supported: Not Supported 00:20:12.007 00:20:12.007 Controller Memory Buffer Support 00:20:12.007 ================================ 00:20:12.007 Supported: No 00:20:12.007 00:20:12.007 Persistent Memory Region Support 00:20:12.007 ================================ 00:20:12.007 Supported: No 00:20:12.007 00:20:12.007 Admin Command Set Attributes 00:20:12.007 ============================ 00:20:12.007 Security Send/Receive: Not Supported 00:20:12.007 Format NVM: Not Supported 00:20:12.007 Firmware Activate/Download: Not Supported 00:20:12.007 Namespace Management: Not Supported 00:20:12.007 Device Self-Test: Not Supported 00:20:12.007 Directives: Not Supported 00:20:12.007 NVMe-MI: Not Supported 00:20:12.007 Virtualization Management: Not Supported 00:20:12.007 Doorbell Buffer Config: Not Supported 00:20:12.007 Get LBA Status Capability: Not Supported 00:20:12.007 Command & Feature Lockdown Capability: Not Supported 00:20:12.007 Abort Command Limit: 4 00:20:12.007 Async Event Request Limit: 4 00:20:12.007 Number of Firmware Slots: N/A 00:20:12.007 Firmware Slot 1 Read-Only: N/A 00:20:12.007 Firmware Activation Without Reset: N/A 00:20:12.007 Multiple Update Detection Support: N/A 00:20:12.007 Firmware Update Granularity: No Information Provided 00:20:12.007 Per-Namespace SMART Log: Yes 00:20:12.007 Asymmetric Namespace Access Log Page: Supported 00:20:12.007 ANA Transition Time : 10 sec 00:20:12.007 00:20:12.007 Asymmetric Namespace Access Capabilities 00:20:12.007 ANA Optimized State : Supported 00:20:12.007 ANA Non-Optimized State : Supported 00:20:12.007 ANA Inaccessible State : Supported 00:20:12.007 ANA Persistent Loss State : Supported 00:20:12.007 ANA Change State : Supported 00:20:12.007 ANAGRPID is not changed : No 00:20:12.007 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:20:12.007 00:20:12.007 ANA Group Identifier Maximum : 128 00:20:12.007 Number of ANA Group Identifiers : 128 00:20:12.007 Max Number of Allowed Namespaces : 1024 00:20:12.007 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:20:12.007 Command Effects Log Page: Supported 00:20:12.007 Get Log Page Extended Data: Supported 00:20:12.007 Telemetry Log Pages: Not Supported 00:20:12.007 Persistent Event Log Pages: Not Supported 00:20:12.007 Supported Log Pages Log Page: May Support 00:20:12.007 Commands Supported & Effects Log Page: Not Supported 00:20:12.007 Feature Identifiers & Effects Log Page:May Support 00:20:12.007 NVMe-MI Commands & Effects Log Page: May Support 00:20:12.007 Data Area 4 for Telemetry Log: Not Supported 00:20:12.007 Error Log Page Entries Supported: 128 00:20:12.008 Keep Alive: Supported 00:20:12.008 Keep Alive Granularity: 1000 ms 00:20:12.008 00:20:12.008 NVM Command Set Attributes 00:20:12.008 ========================== 00:20:12.008 Submission Queue Entry Size 00:20:12.008 Max: 64 00:20:12.008 Min: 64 00:20:12.008 Completion Queue Entry Size 00:20:12.008 Max: 16 00:20:12.008 Min: 16 00:20:12.008 Number of Namespaces: 1024 00:20:12.008 Compare Command: Not Supported 00:20:12.008 Write Uncorrectable Command: Not Supported 00:20:12.008 Dataset Management Command: Supported 00:20:12.008 Write Zeroes Command: Supported 00:20:12.008 Set Features Save Field: Not Supported 00:20:12.008 Reservations: Not Supported 00:20:12.008 Timestamp: Not Supported 00:20:12.008 Copy: Not Supported 00:20:12.008 Volatile Write Cache: Present 00:20:12.008 Atomic Write Unit (Normal): 1 00:20:12.008 Atomic Write Unit (PFail): 1 00:20:12.008 Atomic Compare & Write Unit: 1 00:20:12.008 Fused Compare & Write: Not Supported 00:20:12.008 Scatter-Gather List 00:20:12.008 SGL Command Set: Supported 00:20:12.008 SGL Keyed: Not Supported 00:20:12.008 SGL Bit Bucket Descriptor: Not Supported 00:20:12.008 SGL Metadata Pointer: Not Supported 00:20:12.008 Oversized SGL: Not Supported 00:20:12.008 SGL Metadata Address: Not Supported 00:20:12.008 SGL Offset: Supported 00:20:12.008 Transport SGL Data Block: Not Supported 00:20:12.008 Replay Protected Memory Block: Not Supported 00:20:12.008 00:20:12.008 Firmware Slot Information 00:20:12.008 ========================= 00:20:12.008 Active slot: 0 00:20:12.008 00:20:12.008 Asymmetric Namespace Access 00:20:12.008 =========================== 00:20:12.008 Change Count : 0 00:20:12.008 Number of ANA Group Descriptors : 1 00:20:12.008 ANA Group Descriptor : 0 00:20:12.008 ANA Group ID : 1 00:20:12.008 Number of NSID Values : 1 00:20:12.008 Change Count : 0 00:20:12.008 ANA State : 1 00:20:12.008 Namespace Identifier : 1 00:20:12.008 00:20:12.008 Commands Supported and Effects 00:20:12.008 ============================== 00:20:12.008 Admin Commands 00:20:12.008 -------------- 00:20:12.008 Get Log Page (02h): Supported 00:20:12.008 Identify (06h): Supported 00:20:12.008 Abort (08h): Supported 00:20:12.008 Set Features (09h): Supported 00:20:12.008 Get Features (0Ah): Supported 00:20:12.008 Asynchronous Event Request (0Ch): Supported 00:20:12.008 Keep Alive (18h): Supported 00:20:12.008 I/O Commands 00:20:12.008 ------------ 00:20:12.008 Flush (00h): Supported 00:20:12.008 Write (01h): Supported LBA-Change 00:20:12.008 Read (02h): Supported 00:20:12.008 Write Zeroes (08h): Supported LBA-Change 00:20:12.008 Dataset Management (09h): Supported 00:20:12.008 00:20:12.008 Error Log 00:20:12.008 ========= 00:20:12.008 Entry: 0 00:20:12.008 Error Count: 0x3 00:20:12.008 Submission Queue Id: 0x0 00:20:12.008 Command Id: 0x5 00:20:12.008 Phase Bit: 0 00:20:12.008 Status Code: 0x2 00:20:12.008 Status Code Type: 0x0 00:20:12.008 Do Not Retry: 1 00:20:12.008 Error 
Location: 0x28 00:20:12.008 LBA: 0x0 00:20:12.008 Namespace: 0x0 00:20:12.008 Vendor Log Page: 0x0 00:20:12.008 ----------- 00:20:12.008 Entry: 1 00:20:12.008 Error Count: 0x2 00:20:12.008 Submission Queue Id: 0x0 00:20:12.008 Command Id: 0x5 00:20:12.008 Phase Bit: 0 00:20:12.008 Status Code: 0x2 00:20:12.008 Status Code Type: 0x0 00:20:12.008 Do Not Retry: 1 00:20:12.008 Error Location: 0x28 00:20:12.008 LBA: 0x0 00:20:12.008 Namespace: 0x0 00:20:12.008 Vendor Log Page: 0x0 00:20:12.008 ----------- 00:20:12.008 Entry: 2 00:20:12.008 Error Count: 0x1 00:20:12.008 Submission Queue Id: 0x0 00:20:12.008 Command Id: 0x4 00:20:12.008 Phase Bit: 0 00:20:12.008 Status Code: 0x2 00:20:12.008 Status Code Type: 0x0 00:20:12.008 Do Not Retry: 1 00:20:12.008 Error Location: 0x28 00:20:12.008 LBA: 0x0 00:20:12.008 Namespace: 0x0 00:20:12.008 Vendor Log Page: 0x0 00:20:12.008 00:20:12.008 Number of Queues 00:20:12.008 ================ 00:20:12.008 Number of I/O Submission Queues: 128 00:20:12.008 Number of I/O Completion Queues: 128 00:20:12.008 00:20:12.008 ZNS Specific Controller Data 00:20:12.008 ============================ 00:20:12.008 Zone Append Size Limit: 0 00:20:12.008 00:20:12.008 00:20:12.008 Active Namespaces 00:20:12.008 ================= 00:20:12.008 get_feature(0x05) failed 00:20:12.008 Namespace ID:1 00:20:12.008 Command Set Identifier: NVM (00h) 00:20:12.008 Deallocate: Supported 00:20:12.008 Deallocated/Unwritten Error: Not Supported 00:20:12.008 Deallocated Read Value: Unknown 00:20:12.008 Deallocate in Write Zeroes: Not Supported 00:20:12.008 Deallocated Guard Field: 0xFFFF 00:20:12.008 Flush: Supported 00:20:12.008 Reservation: Not Supported 00:20:12.008 Namespace Sharing Capabilities: Multiple Controllers 00:20:12.008 Size (in LBAs): 1310720 (5GiB) 00:20:12.008 Capacity (in LBAs): 1310720 (5GiB) 00:20:12.008 Utilization (in LBAs): 1310720 (5GiB) 00:20:12.008 UUID: 8de50cae-12b8-429a-a1f7-a65a6b98b012 00:20:12.008 Thin Provisioning: Not Supported 00:20:12.008 Per-NS Atomic Units: Yes 00:20:12.008 Atomic Boundary Size (Normal): 0 00:20:12.008 Atomic Boundary Size (PFail): 0 00:20:12.008 Atomic Boundary Offset: 0 00:20:12.008 NGUID/EUI64 Never Reused: No 00:20:12.008 ANA group ID: 1 00:20:12.008 Namespace Write Protected: No 00:20:12.008 Number of LBA Formats: 1 00:20:12.008 Current LBA Format: LBA Format #00 00:20:12.008 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:20:12.008 00:20:12.008 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:20:12.008 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:20:12.008 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:20:12.267 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:12.267 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:20:12.267 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:12.267 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:12.267 rmmod nvme_tcp 00:20:12.267 rmmod nvme_fabrics 00:20:12.267 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:12.267 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:20:12.267 00:35:58 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:20:12.267 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:20:12.267 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:20:12.267 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:20:12.267 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:20:12.267 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:20:12.267 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-save 00:20:12.267 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:20:12.267 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-restore 00:20:12.267 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:12.267 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:12.267 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:12.267 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:12.267 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:12.267 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:12.267 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:12.267 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:12.267 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:12.267 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:12.267 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:12.267 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:12.267 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:12.527 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:12.527 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:12.527 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:12.527 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.527 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:12.527 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.527 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:20:12.527 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:20:12.527 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:20:12.527 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # echo 0 00:20:12.527 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:12.527 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:12.527 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:12.527 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:12.527 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:20:12.527 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:20:12.527 00:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@722 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:13.463 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:13.463 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:13.463 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:13.463 00:20:13.463 real 0m3.275s 00:20:13.463 user 0m1.193s 00:20:13.463 sys 0m1.440s 00:20:13.463 ************************************ 00:20:13.463 END TEST nvmf_identify_kernel_target 00:20:13.463 ************************************ 00:20:13.463 00:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:13.463 00:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.463 00:35:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:13.463 00:35:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:13.463 00:35:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:13.463 00:35:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.463 ************************************ 00:20:13.463 START TEST nvmf_auth_host 00:20:13.463 ************************************ 00:20:13.463 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:13.463 * Looking for test storage... 
00:20:13.463 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:13.463 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:13.463 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:20:13.463 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:13.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.723 --rc genhtml_branch_coverage=1 00:20:13.723 --rc genhtml_function_coverage=1 00:20:13.723 --rc genhtml_legend=1 00:20:13.723 --rc geninfo_all_blocks=1 00:20:13.723 --rc geninfo_unexecuted_blocks=1 00:20:13.723 00:20:13.723 ' 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:13.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.723 --rc genhtml_branch_coverage=1 00:20:13.723 --rc genhtml_function_coverage=1 00:20:13.723 --rc genhtml_legend=1 00:20:13.723 --rc geninfo_all_blocks=1 00:20:13.723 --rc geninfo_unexecuted_blocks=1 00:20:13.723 00:20:13.723 ' 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:13.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.723 --rc genhtml_branch_coverage=1 00:20:13.723 --rc genhtml_function_coverage=1 00:20:13.723 --rc genhtml_legend=1 00:20:13.723 --rc geninfo_all_blocks=1 00:20:13.723 --rc geninfo_unexecuted_blocks=1 00:20:13.723 00:20:13.723 ' 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:13.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.723 --rc genhtml_branch_coverage=1 00:20:13.723 --rc genhtml_function_coverage=1 00:20:13.723 --rc genhtml_legend=1 00:20:13.723 --rc geninfo_all_blocks=1 00:20:13.723 --rc geninfo_unexecuted_blocks=1 00:20:13.723 00:20:13.723 ' 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:13.723 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:20:13.723 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@456 -- # nvmf_veth_init 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:13.724 Cannot find device "nvmf_init_br" 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:13.724 Cannot find device "nvmf_init_br2" 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:13.724 Cannot find device "nvmf_tgt_br" 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:13.724 Cannot find device "nvmf_tgt_br2" 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:13.724 Cannot find device "nvmf_init_br" 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:13.724 Cannot find device "nvmf_init_br2" 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:13.724 Cannot find device "nvmf_tgt_br" 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:13.724 Cannot find device "nvmf_tgt_br2" 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:13.724 Cannot find device "nvmf_br" 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:13.724 Cannot find device "nvmf_init_if" 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:20:13.724 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:13.984 Cannot find device "nvmf_init_if2" 00:20:13.984 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:20:13.984 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:13.984 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:13.984 00:35:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:20:13.984 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:13.984 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:13.984 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:20:13.984 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:13.984 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:13.984 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:13.984 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:13.984 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:13.984 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:13.984 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:13.984 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:13.984 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:13.984 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:13.984 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:13.984 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:13.985 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:13.985 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:13.985 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:13.985 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:13.985 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:13.985 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:13.985 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:13.985 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:13.985 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:13.985 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:13.985 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:13.985 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:13.985 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
00:20:13.985 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:13.985 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:13.985 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:13.985 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:13.985 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:13.985 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:13.985 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:13.985 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:13.985 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:13.985 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:20:13.985 00:20:13.985 --- 10.0.0.3 ping statistics --- 00:20:13.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.985 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:20:13.985 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:13.985 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:13.985 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:20:13.985 00:20:13.985 --- 10.0.0.4 ping statistics --- 00:20:13.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.985 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:20:13.985 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:13.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:13.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:20:13.985 00:20:13.985 --- 10.0.0.1 ping statistics --- 00:20:13.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.985 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:20:13.985 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:13.985 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:13.985 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:20:13.985 00:20:13.985 --- 10.0.0.2 ping statistics --- 00:20:13.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.985 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:20:13.985 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:13.985 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@457 -- # return 0 00:20:13.985 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:20:13.985 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:13.985 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:20:13.985 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:20:13.985 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:13.985 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:20:13.985 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:20:14.244 00:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:20:14.244 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:20:14.244 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:14.244 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.244 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # nvmfpid=92603 00:20:14.244 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # waitforlisten 92603 00:20:14.244 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:20:14.244 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 92603 ']' 00:20:14.244 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.244 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:14.244 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:14.244 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:14.244 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.503 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:14.503 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:20:14.503 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:14.503 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:14.503 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.503 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.503 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:20:14.503 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:20:14.503 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:14.503 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:14.503 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:14.503 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:20:14.503 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:20:14.503 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:14.503 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=04f5d05ef804bf7cc6283740b4fa0e04 00:20:14.503 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:20:14.503 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.VYQ 00:20:14.503 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 04f5d05ef804bf7cc6283740b4fa0e04 0 00:20:14.503 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 04f5d05ef804bf7cc6283740b4fa0e04 0 00:20:14.503 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:14.503 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:14.503 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=04f5d05ef804bf7cc6283740b4fa0e04 00:20:14.503 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:20:14.503 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:14.503 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.VYQ 00:20:14.503 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.VYQ 00:20:14.504 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.VYQ 00:20:14.504 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:20:14.504 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:14.504 00:36:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:14.504 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:14.504 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:20:14.504 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:20:14.504 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:14.504 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=3be4a31b8079cef57dac679d7875d0606009dc68c6ebc98f4748f973432e0f62 00:20:14.504 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:20:14.504 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.7Nm 00:20:14.504 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 3be4a31b8079cef57dac679d7875d0606009dc68c6ebc98f4748f973432e0f62 3 00:20:14.504 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 3be4a31b8079cef57dac679d7875d0606009dc68c6ebc98f4748f973432e0f62 3 00:20:14.504 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:14.504 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:14.504 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=3be4a31b8079cef57dac679d7875d0606009dc68c6ebc98f4748f973432e0f62 00:20:14.504 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:20:14.504 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.7Nm 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.7Nm 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.7Nm 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=ef53d88f48b72949947bc700d4042c0dd329c4219e3d2259 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.azY 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key ef53d88f48b72949947bc700d4042c0dd329c4219e3d2259 0 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 ef53d88f48b72949947bc700d4042c0dd329c4219e3d2259 0 
00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=ef53d88f48b72949947bc700d4042c0dd329c4219e3d2259 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.azY 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.azY 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.azY 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=e58406425c00484362e4b55515408874bb2fd2e2c126085f 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.Oyd 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key e58406425c00484362e4b55515408874bb2fd2e2c126085f 2 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 e58406425c00484362e4b55515408874bb2fd2e2c126085f 2 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=e58406425c00484362e4b55515408874bb2fd2e2c126085f 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.Oyd 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.Oyd 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Oyd 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:14.763 00:36:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=58e44b7889ecc0b012de0b15577bf7ce 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.6Ft 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 58e44b7889ecc0b012de0b15577bf7ce 1 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 58e44b7889ecc0b012de0b15577bf7ce 1 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=58e44b7889ecc0b012de0b15577bf7ce 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.6Ft 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.6Ft 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.6Ft 00:20:14.763 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:20:14.764 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:14.764 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:14.764 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:14.764 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:20:14.764 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:20:14.764 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:14.764 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=48ec0950e8aaeb4f7cb3af5a2889bbe4 00:20:14.764 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:20:14.764 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.WLV 00:20:14.764 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 48ec0950e8aaeb4f7cb3af5a2889bbe4 1 00:20:14.764 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 48ec0950e8aaeb4f7cb3af5a2889bbe4 1 00:20:14.764 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:14.764 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:14.764 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # 
key=48ec0950e8aaeb4f7cb3af5a2889bbe4 00:20:14.764 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:20:14.764 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.WLV 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.WLV 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.WLV 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=538b9febb9f4448eebd270594055b64bed5c25c4a262f2b5 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.L6G 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 538b9febb9f4448eebd270594055b64bed5c25c4a262f2b5 2 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 538b9febb9f4448eebd270594055b64bed5c25c4a262f2b5 2 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=538b9febb9f4448eebd270594055b64bed5c25c4a262f2b5 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.L6G 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.L6G 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.L6G 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:20:15.023 00:36:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=72a9d8f4f8f0301ce9a0b110ac1b3ed4 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.E8y 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 72a9d8f4f8f0301ce9a0b110ac1b3ed4 0 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 72a9d8f4f8f0301ce9a0b110ac1b3ed4 0 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=72a9d8f4f8f0301ce9a0b110ac1b3ed4 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.E8y 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.E8y 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.E8y 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=91ccc015643226fe33671c63d1cb543a1a5074177609415f92b4801afffa5463 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.hYg 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 91ccc015643226fe33671c63d1cb543a1a5074177609415f92b4801afffa5463 3 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 91ccc015643226fe33671c63d1cb543a1a5074177609415f92b4801afffa5463 3 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=91ccc015643226fe33671c63d1cb543a1a5074177609415f92b4801afffa5463 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@729 -- # python - 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.hYg 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.hYg 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.hYg 00:20:15.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 92603 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 92603 ']' 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:15.023 00:36:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.VYQ 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.7Nm ]] 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7Nm 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.azY 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Oyd ]] 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.Oyd 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.6Ft 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.WLV ]] 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.WLV 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.L6G 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.E8y ]] 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.E8y 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.hYg 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:15.591 00:36:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # local block nvme 00:20:15.591 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:20:15.592 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@666 -- # modprobe nvmet 00:20:15.592 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:15.592 00:36:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:15.850 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:15.850 Waiting for block devices as requested 00:20:15.850 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:16.109 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:16.676 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:16.676 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:16.676 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:20:16.676 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:20:16.676 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:16.676 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:16.676 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:20:16.676 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:20:16.676 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:16.676 No valid GPT data, bailing 00:20:16.676 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:16.676 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:16.676 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:16.676 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:20:16.676 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:16.676 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:16.676 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n2 00:20:16.676 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:20:16.676 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:16.676 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:16.676 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n2 00:20:16.676 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:20:16.676 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:16.676 No valid GPT data, bailing 00:20:16.676 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:16.676 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:16.676 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:20:16.676 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n2 00:20:16.676 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:16.676 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:16.676 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n3 00:20:16.676 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:20:16.676 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:16.676 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:16.676 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n3 00:20:16.676 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:20:16.676 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:16.935 No valid GPT data, bailing 00:20:16.935 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:16.935 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:16.935 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:16.935 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n3 00:20:16.936 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:16.936 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:16.936 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme1n1 00:20:16.936 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:20:16.936 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:16.936 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:16.936 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme1n1 00:20:16.936 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:20:16.936 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:16.936 No valid GPT data, bailing 00:20:16.936 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:16.936 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:16.936 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:16.936 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme1n1 00:20:16.936 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # [[ -b /dev/nvme1n1 ]] 00:20:16.936 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:16.936 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@683 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:16.936 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:16.936 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:20:16.936 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo 1 00:20:16.936 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@692 -- # echo /dev/nvme1n1 00:20:16.936 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:20:16.936 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:20:16.936 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo tcp 00:20:16.936 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 4420 00:20:16.936 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo ipv4 00:20:16.936 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:16.936 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid=93817295-c2e4-400f-aefe-caa93fc06858 -a 10.0.0.1 -t tcp -s 4420 00:20:16.936 00:20:16.936 Discovery Log Number of Records 2, Generation counter 2 00:20:16.936 =====Discovery Log Entry 0====== 00:20:16.936 trtype: tcp 00:20:16.936 adrfam: ipv4 00:20:16.936 subtype: current discovery subsystem 00:20:16.936 treq: not specified, sq flow control disable supported 00:20:16.936 portid: 1 00:20:16.936 trsvcid: 4420 00:20:16.936 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:16.936 traddr: 10.0.0.1 00:20:16.936 eflags: none 00:20:16.936 sectype: none 00:20:16.936 =====Discovery Log Entry 1====== 00:20:16.936 trtype: tcp 00:20:16.936 adrfam: ipv4 00:20:16.936 subtype: nvme subsystem 00:20:16.936 treq: not specified, sq flow control disable supported 00:20:16.936 portid: 1 00:20:16.936 trsvcid: 4420 00:20:16.936 subnqn: nqn.2024-02.io.spdk:cnode0 00:20:16.936 traddr: 10.0.0.1 00:20:16.936 eflags: none 00:20:16.936 sectype: none 00:20:16.936 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:16.936 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:20:16.936 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:16.936 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:16.936 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:16.936 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:16.936 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:16.936 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:16.936 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY1M2Q4OGY0OGI3Mjk0OTk0N2JjNzAwZDQwNDJjMGRkMzI5YzQyMTllM2QyMjU57PSWyg==: 00:20:16.936 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: 00:20:16.936 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:16.936 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:17.195 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY1M2Q4OGY0OGI3Mjk0OTk0N2JjNzAwZDQwNDJjMGRkMzI5YzQyMTllM2QyMjU57PSWyg==: 00:20:17.195 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: ]] 00:20:17.195 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: 00:20:17.195 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:20:17.195 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:20:17.195 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:20:17.195 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:17.195 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:20:17.195 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:17.195 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:20:17.195 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:17.195 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:17.195 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:17.195 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:17.195 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.195 00:36:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 
10.0.0.1 ]] 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.195 nvme0n1 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRmNWQwNWVmODA0YmY3Y2M2MjgzNzQwYjRmYTBlMDShj6zv: 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRmNWQwNWVmODA0YmY3Y2M2MjgzNzQwYjRmYTBlMDShj6zv: 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: ]] 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.195 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.454 nvme0n1 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.454 
00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY1M2Q4OGY0OGI3Mjk0OTk0N2JjNzAwZDQwNDJjMGRkMzI5YzQyMTllM2QyMjU57PSWyg==: 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY1M2Q4OGY0OGI3Mjk0OTk0N2JjNzAwZDQwNDJjMGRkMzI5YzQyMTllM2QyMjU57PSWyg==: 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: ]] 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:17.454 00:36:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.454 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.713 nvme0n1 00:20:17.713 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.713 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.713 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.713 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:17.713 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.713 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.713 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.713 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.713 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.713 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.713 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.713 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:17.713 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:17.713 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:17.713 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:17.713 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:17.713 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:17.713 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NThlNDRiNzg4OWVjYzBiMDEyZGUwYjE1NTc3YmY3Y2UlKigi: 00:20:17.713 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: 00:20:17.713 00:36:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:17.713 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NThlNDRiNzg4OWVjYzBiMDEyZGUwYjE1NTc3YmY3Y2UlKigi: 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: ]] 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.714 nvme0n1 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTM4YjlmZWJiOWY0NDQ4ZWViZDI3MDU5NDA1NWI2NGJlZDVjMjVjNGEyNjJmMmI1v2CFBw==: 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTM4YjlmZWJiOWY0NDQ4ZWViZDI3MDU5NDA1NWI2NGJlZDVjMjVjNGEyNjJmMmI1v2CFBw==: 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: ]] 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.714 00:36:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.714 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.973 nvme0n1 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:17.973 
00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFjY2MwMTU2NDMyMjZmZTMzNjcxYzYzZDFjYjU0M2ExYTUwNzQxNzc2MDk0MTVmOTJiNDgwMWFmZmZhNTQ2M7kLdDI=: 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTFjY2MwMTU2NDMyMjZmZTMzNjcxYzYzZDFjYjU0M2ExYTUwNzQxNzc2MDk0MTVmOTJiNDgwMWFmZmZhNTQ2M7kLdDI=: 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:17.973 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:17.974 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:17.974 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:17.974 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:17.974 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.974 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
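[editor's note] The trace above amounts to one pass of the test's per-key loop: for each (digest, dhgroup, keyid) combination it restricts the host to that digest/dhgroup via bdev_nvme_set_options, attaches the controller over TCP with the matching DH-HMAC-CHAP key (and controller key, when one exists), verifies the controller name, and detaches. A minimal standalone sketch of one such pass follows. It is illustrative only: it assumes SPDK's scripts/rpc.py (the tool the rpc_cmd wrapper in this trace invokes) is available, that the named keys (key3/ckey3) were registered earlier in the script, and it reuses the address, port and NQNs shown in the trace.

    # Sketch of a single sha256/ffdhe2048, keyid=3 iteration (assumptions noted above).
    DIGEST=sha256
    DHGROUP=ffdhe2048
    KEYID=3
    TARGET_IP=10.0.0.1                      # resolved by get_main_ns_ip in the trace
    HOST_NQN=nqn.2024-02.io.spdk:host0
    SUBSYS_NQN=nqn.2024-02.io.spdk:cnode0

    # Restrict the host to the digest/dhgroup pair under test.
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests "$DIGEST" --dhchap-dhgroups "$DHGROUP"

    # Connect with the host key and (when present) the controller key for this keyid.
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$TARGET_IP" -s 4420 \
        -q "$HOST_NQN" -n "$SUBSYS_NQN" --dhchap-key "key$KEYID" --dhchap-ctrlr-key "ckey$KEYID"

    # Verify the controller came up under the expected name, then tear it down.
    name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]
    scripts/rpc.py bdev_nvme_detach_controller nvme0

The target-side half of the handshake (nvmet_auth_set_key, echoing 'hmac(sha256)', the dhgroup and the DHHC-1 secrets into the kernel nvmet configfs) is a test-suite helper rather than an SPDK RPC, which is why it appears in the trace as plain echo commands.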
00:20:18.233 nvme0n1 00:20:18.233 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.233 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:18.233 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.233 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.233 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:18.233 00:36:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.233 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.233 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:18.233 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.233 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.233 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.233 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:18.233 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:18.233 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:20:18.233 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:18.233 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:18.233 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:18.233 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:18.233 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRmNWQwNWVmODA0YmY3Y2M2MjgzNzQwYjRmYTBlMDShj6zv: 00:20:18.233 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: 00:20:18.233 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:18.233 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:18.492 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRmNWQwNWVmODA0YmY3Y2M2MjgzNzQwYjRmYTBlMDShj6zv: 00:20:18.492 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: ]] 00:20:18.492 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: 00:20:18.492 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:20:18.492 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:18.492 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:18.492 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:18.492 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:18.492 00:36:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:18.492 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:18.492 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.492 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.492 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.492 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:18.492 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:18.492 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:18.492 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:18.492 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:18.492 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:18.492 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:18.492 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:18.492 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:18.492 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:18.492 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:18.492 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.492 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.492 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.492 nvme0n1 00:20:18.492 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.492 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:18.492 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:18.492 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.492 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:18.752 00:36:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY1M2Q4OGY0OGI3Mjk0OTk0N2JjNzAwZDQwNDJjMGRkMzI5YzQyMTllM2QyMjU57PSWyg==: 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY1M2Q4OGY0OGI3Mjk0OTk0N2JjNzAwZDQwNDJjMGRkMzI5YzQyMTllM2QyMjU57PSWyg==: 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: ]] 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:18.752 00:36:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.752 nvme0n1 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NThlNDRiNzg4OWVjYzBiMDEyZGUwYjE1NTc3YmY3Y2UlKigi: 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NThlNDRiNzg4OWVjYzBiMDEyZGUwYjE1NTc3YmY3Y2UlKigi: 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: ]] 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:18.752 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:18.753 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:18.753 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:18.753 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:18.753 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.753 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.753 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.753 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:18.753 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:18.753 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:18.753 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:18.753 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:18.753 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:18.753 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:18.753 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:18.753 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:18.753 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:18.753 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:18.753 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.753 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.753 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.012 nvme0n1 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTM4YjlmZWJiOWY0NDQ4ZWViZDI3MDU5NDA1NWI2NGJlZDVjMjVjNGEyNjJmMmI1v2CFBw==: 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTM4YjlmZWJiOWY0NDQ4ZWViZDI3MDU5NDA1NWI2NGJlZDVjMjVjNGEyNjJmMmI1v2CFBw==: 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: ]] 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.012 00:36:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.276 nvme0n1 00:20:19.276 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.276 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFjY2MwMTU2NDMyMjZmZTMzNjcxYzYzZDFjYjU0M2ExYTUwNzQxNzc2MDk0MTVmOTJiNDgwMWFmZmZhNTQ2M7kLdDI=: 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OTFjY2MwMTU2NDMyMjZmZTMzNjcxYzYzZDFjYjU0M2ExYTUwNzQxNzc2MDk0MTVmOTJiNDgwMWFmZmZhNTQ2M7kLdDI=: 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.277 nvme0n1 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.277 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.536 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.536 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:19.536 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.536 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.536 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.536 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:19.536 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:19.536 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:20:19.536 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:19.536 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:19.536 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:19.536 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:19.536 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRmNWQwNWVmODA0YmY3Y2M2MjgzNzQwYjRmYTBlMDShj6zv: 00:20:19.536 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: 00:20:19.536 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:19.536 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:20.103 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRmNWQwNWVmODA0YmY3Y2M2MjgzNzQwYjRmYTBlMDShj6zv: 00:20:20.103 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: ]] 00:20:20.103 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: 00:20:20.103 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:20:20.103 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:20.103 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:20.103 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:20.103 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:20.103 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:20.103 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:20.103 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.103 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.103 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.103 00:36:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:20.103 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:20.103 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:20.103 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:20.103 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:20.104 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:20.104 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:20.104 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:20.104 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:20.104 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:20.104 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:20.104 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.104 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.104 00:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.104 nvme0n1 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZWY1M2Q4OGY0OGI3Mjk0OTk0N2JjNzAwZDQwNDJjMGRkMzI5YzQyMTllM2QyMjU57PSWyg==: 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY1M2Q4OGY0OGI3Mjk0OTk0N2JjNzAwZDQwNDJjMGRkMzI5YzQyMTllM2QyMjU57PSWyg==: 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: ]] 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:20.104 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.364 00:36:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.364 nvme0n1 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NThlNDRiNzg4OWVjYzBiMDEyZGUwYjE1NTc3YmY3Y2UlKigi: 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NThlNDRiNzg4OWVjYzBiMDEyZGUwYjE1NTc3YmY3Y2UlKigi: 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: ]] 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.364 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.624 nvme0n1 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTM4YjlmZWJiOWY0NDQ4ZWViZDI3MDU5NDA1NWI2NGJlZDVjMjVjNGEyNjJmMmI1v2CFBw==: 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTM4YjlmZWJiOWY0NDQ4ZWViZDI3MDU5NDA1NWI2NGJlZDVjMjVjNGEyNjJmMmI1v2CFBw==: 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: ]] 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.624 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.884 nvme0n1 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFjY2MwMTU2NDMyMjZmZTMzNjcxYzYzZDFjYjU0M2ExYTUwNzQxNzc2MDk0MTVmOTJiNDgwMWFmZmZhNTQ2M7kLdDI=: 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTFjY2MwMTU2NDMyMjZmZTMzNjcxYzYzZDFjYjU0M2ExYTUwNzQxNzc2MDk0MTVmOTJiNDgwMWFmZmZhNTQ2M7kLdDI=: 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:20.884 00:36:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.884 00:36:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.143 nvme0n1 00:20:21.143 00:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.143 00:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:21.143 00:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:21.143 00:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.143 00:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.143 00:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.143 00:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.143 00:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:21.143 00:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.143 00:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.143 00:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.143 00:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:21.143 00:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:21.143 00:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:20:21.144 00:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:21.144 00:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:21.144 00:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:21.144 00:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:21.144 00:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRmNWQwNWVmODA0YmY3Y2M2MjgzNzQwYjRmYTBlMDShj6zv: 00:20:21.144 00:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: 00:20:21.144 00:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:21.144 00:36:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:23.049 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRmNWQwNWVmODA0YmY3Y2M2MjgzNzQwYjRmYTBlMDShj6zv: 00:20:23.049 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: ]] 00:20:23.049 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.050 nvme0n1 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY1M2Q4OGY0OGI3Mjk0OTk0N2JjNzAwZDQwNDJjMGRkMzI5YzQyMTllM2QyMjU57PSWyg==: 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWY1M2Q4OGY0OGI3Mjk0OTk0N2JjNzAwZDQwNDJjMGRkMzI5YzQyMTllM2QyMjU57PSWyg==: 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: ]] 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.050 00:36:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.310 nvme0n1 00:20:23.310 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.310 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:23.310 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:23.310 00:36:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.310 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.310 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.310 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.310 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:23.310 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.310 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.570 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.570 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:23.570 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:20:23.570 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:23.570 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:23.570 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:23.570 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:23.570 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NThlNDRiNzg4OWVjYzBiMDEyZGUwYjE1NTc3YmY3Y2UlKigi: 00:20:23.570 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: 00:20:23.570 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:23.570 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:23.570 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NThlNDRiNzg4OWVjYzBiMDEyZGUwYjE1NTc3YmY3Y2UlKigi: 00:20:23.570 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: ]] 00:20:23.570 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: 00:20:23.570 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:20:23.570 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:23.570 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:23.570 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:23.570 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:23.571 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:23.571 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:23.571 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.571 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.571 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.571 00:36:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:23.571 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:23.571 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:23.571 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:23.571 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:23.571 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:23.571 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:23.571 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:23.571 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:23.571 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:23.571 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:23.571 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.571 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.571 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.830 nvme0n1 00:20:23.830 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.830 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:23.830 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.830 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:23.830 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.830 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.830 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.830 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:23.830 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.830 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.830 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.830 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:23.830 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:20:23.830 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:23.830 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:23.830 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:23.831 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:23.831 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTM4YjlmZWJiOWY0NDQ4ZWViZDI3MDU5NDA1NWI2NGJlZDVjMjVjNGEyNjJmMmI1v2CFBw==: 00:20:23.831 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: 00:20:23.831 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:23.831 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:23.831 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTM4YjlmZWJiOWY0NDQ4ZWViZDI3MDU5NDA1NWI2NGJlZDVjMjVjNGEyNjJmMmI1v2CFBw==: 00:20:23.831 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: ]] 00:20:23.831 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: 00:20:23.831 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:20:23.831 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:23.831 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:23.831 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:23.831 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:23.831 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:23.831 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:23.831 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.831 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.831 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.831 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:23.831 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:23.831 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:23.831 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:23.831 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:23.831 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:23.831 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:23.831 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:23.831 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:23.831 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:23.831 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:23.831 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:23.831 00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.831 
00:36:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.118 nvme0n1 00:20:24.118 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.118 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:24.118 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:24.118 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.118 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.118 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.118 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.118 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:24.118 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.118 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.119 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.119 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:24.119 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:20:24.119 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:24.119 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:24.119 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:24.119 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:24.119 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFjY2MwMTU2NDMyMjZmZTMzNjcxYzYzZDFjYjU0M2ExYTUwNzQxNzc2MDk0MTVmOTJiNDgwMWFmZmZhNTQ2M7kLdDI=: 00:20:24.119 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:24.119 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:24.119 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:24.119 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTFjY2MwMTU2NDMyMjZmZTMzNjcxYzYzZDFjYjU0M2ExYTUwNzQxNzc2MDk0MTVmOTJiNDgwMWFmZmZhNTQ2M7kLdDI=: 00:20:24.119 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:24.119 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:20:24.119 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:24.119 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:24.119 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:24.119 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:24.119 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:24.119 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:24.119 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.119 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.401 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.401 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:24.401 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:24.401 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:24.401 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:24.401 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:24.401 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:24.401 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:24.401 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:24.401 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:24.401 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:24.401 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:24.401 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:24.401 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.401 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.660 nvme0n1 00:20:24.660 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.660 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:24.660 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.660 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:24.660 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.660 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.660 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.660 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:24.660 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.660 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.660 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.660 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:24.660 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:24.660 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:20:24.660 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:24.660 00:36:10 
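Each connect_authenticate <digest> <dhgroup> <keyid> pass in this trace is the same four RPCs issued through the suite's rpc_cmd wrapper: restrict the host's DH-CHAP digests and DH groups, attach with the per-keyid secrets, confirm the controller exists, detach. Outside the harness the equivalent sequence can be sent with SPDK's scripts/rpc.py; the sketch below mirrors the flags visible in the log and assumes the secrets were registered with the host earlier under the names key0..key4 / ckey0..ckey4 (the log shows only the names, not how they were loaded):

    # One connect_authenticate cycle, re-expressed with scripts/rpc.py instead
    # of the test's rpc_cmd wrapper. Key names are assumed to be pre-registered.
    DIGEST=sha256 DHGROUP=ffdhe8192 KEYID=0

    # Restrict the initiator to the digest/DH group under test.
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests "$DIGEST" --dhchap-dhgroups "$DHGROUP"

    # Attach with the per-keyid host and controller secrets; the attach only
    # completes if DH-HMAC-CHAP authentication succeeds.
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$KEYID" --dhchap-ctrlr-key "ckey$KEYID"

    # Verify the controller came up, then tear it down for the next iteration.
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0
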
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:24.660 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:24.660 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:24.660 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRmNWQwNWVmODA0YmY3Y2M2MjgzNzQwYjRmYTBlMDShj6zv: 00:20:24.660 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: 00:20:24.660 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:24.660 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:24.660 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRmNWQwNWVmODA0YmY3Y2M2MjgzNzQwYjRmYTBlMDShj6zv: 00:20:24.660 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: ]] 00:20:24.660 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: 00:20:24.660 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:20:24.660 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:24.660 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:24.660 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:24.660 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:24.661 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:24.661 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:24.661 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.661 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.661 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.661 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:24.661 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:24.661 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:24.661 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:24.661 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:24.661 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:24.661 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:24.661 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:24.661 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:24.661 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:24.661 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:24.661 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.661 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.661 00:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.229 nvme0n1 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY1M2Q4OGY0OGI3Mjk0OTk0N2JjNzAwZDQwNDJjMGRkMzI5YzQyMTllM2QyMjU57PSWyg==: 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY1M2Q4OGY0OGI3Mjk0OTk0N2JjNzAwZDQwNDJjMGRkMzI5YzQyMTllM2QyMjU57PSWyg==: 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: ]] 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.230 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.799 nvme0n1 00:20:25.799 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.799 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:25.799 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.799 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.799 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:25.799 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.799 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.799 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:25.799 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:25.799 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.799 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.799 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:25.799 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:20:25.799 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:25.799 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:25.799 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:25.799 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:25.799 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NThlNDRiNzg4OWVjYzBiMDEyZGUwYjE1NTc3YmY3Y2UlKigi: 00:20:25.799 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: 00:20:25.799 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:25.799 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:25.799 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NThlNDRiNzg4OWVjYzBiMDEyZGUwYjE1NTc3YmY3Y2UlKigi: 00:20:25.799 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: ]] 00:20:25.799 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: 00:20:25.799 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:20:25.799 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:25.799 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:25.799 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:25.799 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:25.799 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:25.799 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:25.800 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.800 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.800 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.800 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:25.800 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:25.800 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:25.800 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:25.800 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:25.800 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:25.800 
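The get_main_ns_ip helper that keeps reappearing (nvmf/common.sh@765-779) is fully visible in its own xtrace: it maps the transport to an environment-variable name and prints that variable's value, which in this run is always the initiator address 10.0.0.1. A reconstruction consistent with the traced lines follows; the transport variable's name and the failure behaviour are assumptions, only the candidate map and the echoed value appear in the log:

    # Reconstruction of get_main_ns_ip from the xtrace above. The transport in
    # this run is tcp, so the indirect lookup resolves to $NVMF_INITIATOR_IP,
    # i.e. 10.0.0.1 here.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z ${TEST_TRANSPORT:-} ]] && return 1              # "tcp" here (variable name assumed)
        [[ -z ${ip_candidates[$TEST_TRANSPORT]:-} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip:-} ]] && return 1                         # indirect: $NVMF_INITIATOR_IP
        echo "${!ip}"
    }
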
00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:25.800 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:25.800 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:25.800 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:25.800 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:25.800 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.800 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.800 00:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.369 nvme0n1 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTM4YjlmZWJiOWY0NDQ4ZWViZDI3MDU5NDA1NWI2NGJlZDVjMjVjNGEyNjJmMmI1v2CFBw==: 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTM4YjlmZWJiOWY0NDQ4ZWViZDI3MDU5NDA1NWI2NGJlZDVjMjVjNGEyNjJmMmI1v2CFBw==: 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: ]] 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.369 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.983 nvme0n1 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:26.983 00:36:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFjY2MwMTU2NDMyMjZmZTMzNjcxYzYzZDFjYjU0M2ExYTUwNzQxNzc2MDk0MTVmOTJiNDgwMWFmZmZhNTQ2M7kLdDI=: 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTFjY2MwMTU2NDMyMjZmZTMzNjcxYzYzZDFjYjU0M2ExYTUwNzQxNzc2MDk0MTVmOTJiNDgwMWFmZmZhNTQ2M7kLdDI=: 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:26.983 00:36:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.983 00:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.552 nvme0n1 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRmNWQwNWVmODA0YmY3Y2M2MjgzNzQwYjRmYTBlMDShj6zv: 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRmNWQwNWVmODA0YmY3Y2M2MjgzNzQwYjRmYTBlMDShj6zv: 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: ]] 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.552 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:27.811 nvme0n1 00:20:27.811 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.811 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:27.811 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:27.811 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.811 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.811 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.811 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.811 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:27.811 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.811 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.811 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.811 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:27.811 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:20:27.811 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:27.811 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:27.811 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:27.811 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:27.812 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY1M2Q4OGY0OGI3Mjk0OTk0N2JjNzAwZDQwNDJjMGRkMzI5YzQyMTllM2QyMjU57PSWyg==: 00:20:27.812 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: 00:20:27.812 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:27.812 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:27.812 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY1M2Q4OGY0OGI3Mjk0OTk0N2JjNzAwZDQwNDJjMGRkMzI5YzQyMTllM2QyMjU57PSWyg==: 00:20:27.812 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: ]] 00:20:27.812 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: 00:20:27.812 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:20:27.812 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:27.812 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:27.812 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:27.812 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:27.812 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:20:27.812 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:27.812 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.812 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.812 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.812 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:27.812 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:27.812 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:27.812 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:27.812 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:27.812 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:27.812 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:27.812 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:27.812 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:27.812 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:27.812 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:27.812 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.812 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.812 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.812 nvme0n1 00:20:27.812 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.812 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:27.812 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:27.812 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.812 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.812 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:20:28.072 
00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NThlNDRiNzg4OWVjYzBiMDEyZGUwYjE1NTc3YmY3Y2UlKigi: 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NThlNDRiNzg4OWVjYzBiMDEyZGUwYjE1NTc3YmY3Y2UlKigi: 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: ]] 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.072 nvme0n1 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.072 00:36:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTM4YjlmZWJiOWY0NDQ4ZWViZDI3MDU5NDA1NWI2NGJlZDVjMjVjNGEyNjJmMmI1v2CFBw==: 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTM4YjlmZWJiOWY0NDQ4ZWViZDI3MDU5NDA1NWI2NGJlZDVjMjVjNGEyNjJmMmI1v2CFBw==: 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: ]] 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:28.072 
00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.072 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.332 nvme0n1 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFjY2MwMTU2NDMyMjZmZTMzNjcxYzYzZDFjYjU0M2ExYTUwNzQxNzc2MDk0MTVmOTJiNDgwMWFmZmZhNTQ2M7kLdDI=: 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTFjY2MwMTU2NDMyMjZmZTMzNjcxYzYzZDFjYjU0M2ExYTUwNzQxNzc2MDk0MTVmOTJiNDgwMWFmZmZhNTQ2M7kLdDI=: 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.332 nvme0n1 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.332 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRmNWQwNWVmODA0YmY3Y2M2MjgzNzQwYjRmYTBlMDShj6zv: 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRmNWQwNWVmODA0YmY3Y2M2MjgzNzQwYjRmYTBlMDShj6zv: 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: ]] 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.592 nvme0n1 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.592 
00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY1M2Q4OGY0OGI3Mjk0OTk0N2JjNzAwZDQwNDJjMGRkMzI5YzQyMTllM2QyMjU57PSWyg==: 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY1M2Q4OGY0OGI3Mjk0OTk0N2JjNzAwZDQwNDJjMGRkMzI5YzQyMTllM2QyMjU57PSWyg==: 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: ]] 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:28.592 00:36:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.592 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.852 nvme0n1 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NThlNDRiNzg4OWVjYzBiMDEyZGUwYjE1NTc3YmY3Y2UlKigi: 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: 00:20:28.852 00:36:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NThlNDRiNzg4OWVjYzBiMDEyZGUwYjE1NTc3YmY3Y2UlKigi: 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: ]] 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.852 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.112 nvme0n1 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTM4YjlmZWJiOWY0NDQ4ZWViZDI3MDU5NDA1NWI2NGJlZDVjMjVjNGEyNjJmMmI1v2CFBw==: 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTM4YjlmZWJiOWY0NDQ4ZWViZDI3MDU5NDA1NWI2NGJlZDVjMjVjNGEyNjJmMmI1v2CFBw==: 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: ]] 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.112 00:36:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.112 00:36:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.112 nvme0n1 00:20:29.112 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.112 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:29.112 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:29.112 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.112 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.112 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:29.372 
00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFjY2MwMTU2NDMyMjZmZTMzNjcxYzYzZDFjYjU0M2ExYTUwNzQxNzc2MDk0MTVmOTJiNDgwMWFmZmZhNTQ2M7kLdDI=: 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTFjY2MwMTU2NDMyMjZmZTMzNjcxYzYzZDFjYjU0M2ExYTUwNzQxNzc2MDk0MTVmOTJiNDgwMWFmZmZhNTQ2M7kLdDI=: 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:20:29.372 nvme0n1 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRmNWQwNWVmODA0YmY3Y2M2MjgzNzQwYjRmYTBlMDShj6zv: 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRmNWQwNWVmODA0YmY3Y2M2MjgzNzQwYjRmYTBlMDShj6zv: 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: ]] 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:29.372 00:36:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.372 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.631 nvme0n1 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:29.631 00:36:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY1M2Q4OGY0OGI3Mjk0OTk0N2JjNzAwZDQwNDJjMGRkMzI5YzQyMTllM2QyMjU57PSWyg==: 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY1M2Q4OGY0OGI3Mjk0OTk0N2JjNzAwZDQwNDJjMGRkMzI5YzQyMTllM2QyMjU57PSWyg==: 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: ]] 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:29.631 00:36:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.631 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.890 nvme0n1 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NThlNDRiNzg4OWVjYzBiMDEyZGUwYjE1NTc3YmY3Y2UlKigi: 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NThlNDRiNzg4OWVjYzBiMDEyZGUwYjE1NTc3YmY3Y2UlKigi: 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: ]] 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.890 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.891 00:36:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.150 nvme0n1 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTM4YjlmZWJiOWY0NDQ4ZWViZDI3MDU5NDA1NWI2NGJlZDVjMjVjNGEyNjJmMmI1v2CFBw==: 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTM4YjlmZWJiOWY0NDQ4ZWViZDI3MDU5NDA1NWI2NGJlZDVjMjVjNGEyNjJmMmI1v2CFBw==: 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: ]] 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.150 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.410 nvme0n1 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFjY2MwMTU2NDMyMjZmZTMzNjcxYzYzZDFjYjU0M2ExYTUwNzQxNzc2MDk0MTVmOTJiNDgwMWFmZmZhNTQ2M7kLdDI=: 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OTFjY2MwMTU2NDMyMjZmZTMzNjcxYzYzZDFjYjU0M2ExYTUwNzQxNzc2MDk0MTVmOTJiNDgwMWFmZmZhNTQ2M7kLdDI=: 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.410 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.670 nvme0n1 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRmNWQwNWVmODA0YmY3Y2M2MjgzNzQwYjRmYTBlMDShj6zv: 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRmNWQwNWVmODA0YmY3Y2M2MjgzNzQwYjRmYTBlMDShj6zv: 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: ]] 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.670 00:36:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.670 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.238 nvme0n1 00:20:31.238 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.238 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:31.238 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:31.238 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.238 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.238 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.238 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.238 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:31.239 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.239 00:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.239 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.239 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:31.239 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:20:31.239 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:31.239 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:31.239 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:31.239 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:31.239 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZWY1M2Q4OGY0OGI3Mjk0OTk0N2JjNzAwZDQwNDJjMGRkMzI5YzQyMTllM2QyMjU57PSWyg==: 00:20:31.239 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: 00:20:31.239 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:31.239 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:31.239 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY1M2Q4OGY0OGI3Mjk0OTk0N2JjNzAwZDQwNDJjMGRkMzI5YzQyMTllM2QyMjU57PSWyg==: 00:20:31.239 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: ]] 00:20:31.239 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: 00:20:31.239 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:20:31.239 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:31.239 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:31.239 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:31.239 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:31.239 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:31.239 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:31.239 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.239 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.239 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.239 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:31.239 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:31.239 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:31.239 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:31.239 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:31.239 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:31.239 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:31.239 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:31.239 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:31.239 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:31.239 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:31.239 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.239 00:36:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.239 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.498 nvme0n1 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NThlNDRiNzg4OWVjYzBiMDEyZGUwYjE1NTc3YmY3Y2UlKigi: 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NThlNDRiNzg4OWVjYzBiMDEyZGUwYjE1NTc3YmY3Y2UlKigi: 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: ]] 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.498 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.757 nvme0n1 00:20:31.757 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.757 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:31.758 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.758 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:31.758 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.758 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTM4YjlmZWJiOWY0NDQ4ZWViZDI3MDU5NDA1NWI2NGJlZDVjMjVjNGEyNjJmMmI1v2CFBw==: 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTM4YjlmZWJiOWY0NDQ4ZWViZDI3MDU5NDA1NWI2NGJlZDVjMjVjNGEyNjJmMmI1v2CFBw==: 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: ]] 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.017 00:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.276 nvme0n1 00:20:32.276 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.276 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:32.276 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.276 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.276 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:32.276 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.276 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.277 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:32.277 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.277 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.277 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.277 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:32.277 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:20:32.277 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:32.277 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:32.277 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:32.277 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:32.277 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFjY2MwMTU2NDMyMjZmZTMzNjcxYzYzZDFjYjU0M2ExYTUwNzQxNzc2MDk0MTVmOTJiNDgwMWFmZmZhNTQ2M7kLdDI=: 00:20:32.277 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:32.277 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:32.277 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:32.277 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTFjY2MwMTU2NDMyMjZmZTMzNjcxYzYzZDFjYjU0M2ExYTUwNzQxNzc2MDk0MTVmOTJiNDgwMWFmZmZhNTQ2M7kLdDI=: 00:20:32.277 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:32.277 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:20:32.277 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:32.277 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:32.277 00:36:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:32.277 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:32.277 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:32.277 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:32.277 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.277 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.277 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.277 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:32.277 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:32.277 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:32.277 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:32.277 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:32.277 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:32.277 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:32.277 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:32.277 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:32.277 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:32.277 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:32.277 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:32.277 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.277 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.536 nvme0n1 00:20:32.536 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.536 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:32.536 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.536 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.536 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:32.536 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.536 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.536 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:32.536 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.536 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.536 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.536 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:32.536 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:32.536 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:20:32.536 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:32.536 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:32.536 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:32.536 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:32.536 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRmNWQwNWVmODA0YmY3Y2M2MjgzNzQwYjRmYTBlMDShj6zv: 00:20:32.536 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: 00:20:32.536 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:32.536 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:32.536 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRmNWQwNWVmODA0YmY3Y2M2MjgzNzQwYjRmYTBlMDShj6zv: 00:20:32.536 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: ]] 00:20:32.536 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: 00:20:32.536 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:20:32.536 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:32.536 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:32.536 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:32.536 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:32.536 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:32.536 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:32.536 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.536 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.795 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.795 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:32.795 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:32.795 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:32.795 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:32.795 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:32.795 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:32.795 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:32.795 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:32.795 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:32.795 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:32.795 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:32.795 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.795 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.795 00:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.363 nvme0n1 00:20:33.363 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.363 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:33.363 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:33.363 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.363 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.363 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.363 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.363 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:33.363 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.363 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.363 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.363 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:33.363 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:20:33.363 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:33.363 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:33.363 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:33.363 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:33.363 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY1M2Q4OGY0OGI3Mjk0OTk0N2JjNzAwZDQwNDJjMGRkMzI5YzQyMTllM2QyMjU57PSWyg==: 00:20:33.363 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: 00:20:33.363 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:33.363 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:33.363 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWY1M2Q4OGY0OGI3Mjk0OTk0N2JjNzAwZDQwNDJjMGRkMzI5YzQyMTllM2QyMjU57PSWyg==: 00:20:33.363 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: ]] 00:20:33.363 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: 00:20:33.363 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:20:33.363 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:33.363 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:33.363 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:33.364 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:33.364 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:33.364 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:33.364 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.364 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.364 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.364 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:33.364 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:33.364 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:33.364 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:33.364 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:33.364 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:33.364 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:33.364 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:33.364 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:33.364 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:33.364 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:33.364 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.364 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.364 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.932 nvme0n1 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.932 00:36:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NThlNDRiNzg4OWVjYzBiMDEyZGUwYjE1NTc3YmY3Y2UlKigi: 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NThlNDRiNzg4OWVjYzBiMDEyZGUwYjE1NTc3YmY3Y2UlKigi: 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: ]] 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.932 00:36:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.932 00:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.500 nvme0n1 00:20:34.500 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.500 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:34.500 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.500 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.500 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:34.500 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.500 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.500 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:34.500 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.500 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.500 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.500 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:34.500 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:20:34.500 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:34.500 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:34.500 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:34.500 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:34.500 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTM4YjlmZWJiOWY0NDQ4ZWViZDI3MDU5NDA1NWI2NGJlZDVjMjVjNGEyNjJmMmI1v2CFBw==: 00:20:34.500 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: 00:20:34.501 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:34.501 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:34.501 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTM4YjlmZWJiOWY0NDQ4ZWViZDI3MDU5NDA1NWI2NGJlZDVjMjVjNGEyNjJmMmI1v2CFBw==: 00:20:34.501 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: ]] 00:20:34.501 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: 00:20:34.501 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:20:34.501 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:34.501 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:34.501 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:34.501 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:34.501 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:34.501 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:34.501 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.501 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.501 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.501 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:34.501 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:34.501 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:34.501 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:34.501 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:34.501 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:34.501 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:34.501 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:34.501 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:34.501 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:34.501 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:34.501 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:34.501 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.501 
00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.070 nvme0n1 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFjY2MwMTU2NDMyMjZmZTMzNjcxYzYzZDFjYjU0M2ExYTUwNzQxNzc2MDk0MTVmOTJiNDgwMWFmZmZhNTQ2M7kLdDI=: 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTFjY2MwMTU2NDMyMjZmZTMzNjcxYzYzZDFjYjU0M2ExYTUwNzQxNzc2MDk0MTVmOTJiNDgwMWFmZmZhNTQ2M7kLdDI=: 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.070 00:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.638 nvme0n1 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:20:35.638 00:36:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRmNWQwNWVmODA0YmY3Y2M2MjgzNzQwYjRmYTBlMDShj6zv: 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRmNWQwNWVmODA0YmY3Y2M2MjgzNzQwYjRmYTBlMDShj6zv: 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: ]] 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:35.638 00:36:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.638 nvme0n1 00:20:35.638 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY1M2Q4OGY0OGI3Mjk0OTk0N2JjNzAwZDQwNDJjMGRkMzI5YzQyMTllM2QyMjU57PSWyg==: 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY1M2Q4OGY0OGI3Mjk0OTk0N2JjNzAwZDQwNDJjMGRkMzI5YzQyMTllM2QyMjU57PSWyg==: 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: ]] 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: 00:20:35.898 00:36:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.898 nvme0n1 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NThlNDRiNzg4OWVjYzBiMDEyZGUwYjE1NTc3YmY3Y2UlKigi: 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NThlNDRiNzg4OWVjYzBiMDEyZGUwYjE1NTc3YmY3Y2UlKigi: 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: ]] 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.898 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.899 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:35.899 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:35.899 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:35.899 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:35.899 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.899 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:35.899 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:35.899 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:35.899 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:35.899 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:35.899 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:35.899 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.899 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.899 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.158 nvme0n1 00:20:36.158 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.158 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.158 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:36.158 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.158 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.158 00:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTM4YjlmZWJiOWY0NDQ4ZWViZDI3MDU5NDA1NWI2NGJlZDVjMjVjNGEyNjJmMmI1v2CFBw==: 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:NTM4YjlmZWJiOWY0NDQ4ZWViZDI3MDU5NDA1NWI2NGJlZDVjMjVjNGEyNjJmMmI1v2CFBw==: 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: ]] 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.158 nvme0n1 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:36.158 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFjY2MwMTU2NDMyMjZmZTMzNjcxYzYzZDFjYjU0M2ExYTUwNzQxNzc2MDk0MTVmOTJiNDgwMWFmZmZhNTQ2M7kLdDI=: 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTFjY2MwMTU2NDMyMjZmZTMzNjcxYzYzZDFjYjU0M2ExYTUwNzQxNzc2MDk0MTVmOTJiNDgwMWFmZmZhNTQ2M7kLdDI=: 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@766 -- # ip_candidates=() 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.418 nvme0n1 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:36.418 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:36.419 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:36.419 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:36.419 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRmNWQwNWVmODA0YmY3Y2M2MjgzNzQwYjRmYTBlMDShj6zv: 00:20:36.419 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: 00:20:36.419 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:36.419 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:36.419 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRmNWQwNWVmODA0YmY3Y2M2MjgzNzQwYjRmYTBlMDShj6zv: 00:20:36.419 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: ]] 00:20:36.419 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: 00:20:36.419 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:20:36.419 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:36.419 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:36.419 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:36.419 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:36.419 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:36.419 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:36.419 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.419 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.419 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.419 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:36.419 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:36.419 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:36.419 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:36.419 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.419 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.419 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:36.419 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:36.678 nvme0n1 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY1M2Q4OGY0OGI3Mjk0OTk0N2JjNzAwZDQwNDJjMGRkMzI5YzQyMTllM2QyMjU57PSWyg==: 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY1M2Q4OGY0OGI3Mjk0OTk0N2JjNzAwZDQwNDJjMGRkMzI5YzQyMTllM2QyMjU57PSWyg==: 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: ]] 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.678 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.937 nvme0n1 00:20:36.937 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.937 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.937 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.937 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:36.937 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.937 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.937 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.937 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.937 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.937 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.937 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.938 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:36.938 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:20:36.938 
00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:36.938 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:36.938 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:36.938 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:36.938 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NThlNDRiNzg4OWVjYzBiMDEyZGUwYjE1NTc3YmY3Y2UlKigi: 00:20:36.938 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: 00:20:36.938 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:36.938 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:36.938 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NThlNDRiNzg4OWVjYzBiMDEyZGUwYjE1NTc3YmY3Y2UlKigi: 00:20:36.938 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: ]] 00:20:36.938 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: 00:20:36.938 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:20:36.938 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:36.938 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:36.938 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:36.938 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:36.938 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:36.938 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:36.938 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.938 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.938 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.938 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:36.938 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:36.938 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:36.938 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:36.938 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.938 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.938 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:36.938 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:36.938 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:36.938 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:36.938 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:36.938 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.938 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.938 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.197 nvme0n1 00:20:37.197 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.197 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:37.197 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:37.197 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.197 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.197 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.197 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.197 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:37.197 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.197 00:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.197 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.197 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:37.197 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:20:37.197 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:37.197 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:37.197 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:37.197 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:37.197 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTM4YjlmZWJiOWY0NDQ4ZWViZDI3MDU5NDA1NWI2NGJlZDVjMjVjNGEyNjJmMmI1v2CFBw==: 00:20:37.197 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: 00:20:37.197 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:37.197 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:37.197 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTM4YjlmZWJiOWY0NDQ4ZWViZDI3MDU5NDA1NWI2NGJlZDVjMjVjNGEyNjJmMmI1v2CFBw==: 00:20:37.197 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: ]] 00:20:37.197 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: 00:20:37.197 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:20:37.197 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:37.197 
00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:37.197 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:37.197 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:37.197 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:37.198 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:37.198 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.198 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.198 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.198 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:37.198 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:37.198 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:37.198 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:37.198 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:37.198 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:37.198 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:37.198 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:37.198 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:37.198 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:37.198 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:37.198 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:37.198 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.198 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.198 nvme0n1 00:20:37.198 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.198 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:37.198 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:37.198 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.198 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.198 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.198 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.198 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:37.198 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.198 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFjY2MwMTU2NDMyMjZmZTMzNjcxYzYzZDFjYjU0M2ExYTUwNzQxNzc2MDk0MTVmOTJiNDgwMWFmZmZhNTQ2M7kLdDI=: 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTFjY2MwMTU2NDMyMjZmZTMzNjcxYzYzZDFjYjU0M2ExYTUwNzQxNzc2MDk0MTVmOTJiNDgwMWFmZmZhNTQ2M7kLdDI=: 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.458 nvme0n1 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRmNWQwNWVmODA0YmY3Y2M2MjgzNzQwYjRmYTBlMDShj6zv: 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRmNWQwNWVmODA0YmY3Y2M2MjgzNzQwYjRmYTBlMDShj6zv: 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: ]] 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.458 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.718 nvme0n1 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.718 
00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY1M2Q4OGY0OGI3Mjk0OTk0N2JjNzAwZDQwNDJjMGRkMzI5YzQyMTllM2QyMjU57PSWyg==: 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY1M2Q4OGY0OGI3Mjk0OTk0N2JjNzAwZDQwNDJjMGRkMzI5YzQyMTllM2QyMjU57PSWyg==: 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: ]] 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:37.718 00:36:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.718 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.977 nvme0n1 00:20:37.977 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.977 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:37.977 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:37.977 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.977 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.977 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.977 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.977 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:37.977 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.977 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.977 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.977 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:37.977 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:20:37.977 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:37.977 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:37.977 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:37.977 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:37.977 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NThlNDRiNzg4OWVjYzBiMDEyZGUwYjE1NTc3YmY3Y2UlKigi: 00:20:37.977 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: 00:20:37.977 00:36:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:37.977 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:37.977 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NThlNDRiNzg4OWVjYzBiMDEyZGUwYjE1NTc3YmY3Y2UlKigi: 00:20:37.977 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: ]] 00:20:37.977 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: 00:20:37.977 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:20:37.977 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:37.977 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:37.977 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:37.977 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:37.977 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:37.977 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:37.977 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.977 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.977 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.977 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:37.977 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:37.977 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:37.977 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:37.977 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:37.977 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:37.977 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:37.978 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:37.978 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:37.978 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:37.978 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:37.978 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.978 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.978 00:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.237 nvme0n1 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTM4YjlmZWJiOWY0NDQ4ZWViZDI3MDU5NDA1NWI2NGJlZDVjMjVjNGEyNjJmMmI1v2CFBw==: 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTM4YjlmZWJiOWY0NDQ4ZWViZDI3MDU5NDA1NWI2NGJlZDVjMjVjNGEyNjJmMmI1v2CFBw==: 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: ]] 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.237 00:36:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.237 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.497 nvme0n1 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:38.497 
00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFjY2MwMTU2NDMyMjZmZTMzNjcxYzYzZDFjYjU0M2ExYTUwNzQxNzc2MDk0MTVmOTJiNDgwMWFmZmZhNTQ2M7kLdDI=: 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTFjY2MwMTU2NDMyMjZmZTMzNjcxYzYzZDFjYjU0M2ExYTUwNzQxNzc2MDk0MTVmOTJiNDgwMWFmZmZhNTQ2M7kLdDI=: 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.497 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
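(Editor's note on the trace above and below: each (digest, dhgroup, keyid) combination repeats the same pattern. host/auth.sh loads the key pair on the target side with the nvmet_auth_set_key helper, restricts the host to the digest and DH group under test with bdev_nvme_set_options, attaches a controller passing the matching --dhchap-key and, when a controller key exists, --dhchap-ctrlr-key, confirms via bdev_nvme_get_controllers that nvme0 came up, and finally detaches it. The following is a minimal sketch of one such iteration, condensed from the commands visible in this trace; it assumes the harness environment — rpc_cmd forwarding to the running SPDK app, a target reachable at 10.0.0.1:4420, and the named secrets key2/ckey2 registered earlier in the script (not shown in this excerpt) — so it is illustrative rather than a drop-in script.)

    # One connect_authenticate iteration as exercised above:
    # sha512 digest, ffdhe3072 DH group, key id 2.
    digest=sha512
    dhgroup=ffdhe3072
    keyid=2

    # Target side: install the key pair for the subsystem (test helper from host/auth.sh).
    nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

    # Host side: only allow the digest/DH group under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Connect with DH-HMAC-CHAP, including the bidirectional (controller) key.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

    # Authentication succeeded if the controller shows up; then clean up.
    [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

(When keyid=4 there is no controller key, so the trace omits --dhchap-ctrlr-key for that case; the outer loops then advance through the remaining DH groups — ffdhe4096, ffdhe6144, ffdhe8192 — as seen in the entries that follow.)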
00:20:38.756 nvme0n1 00:20:38.756 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.756 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:38.756 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:38.756 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.756 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.756 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.756 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.756 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:38.756 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.756 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.756 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.756 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:38.756 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:38.756 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:20:38.756 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:38.757 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:38.757 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:38.757 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:38.757 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRmNWQwNWVmODA0YmY3Y2M2MjgzNzQwYjRmYTBlMDShj6zv: 00:20:38.757 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: 00:20:38.757 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:38.757 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:38.757 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRmNWQwNWVmODA0YmY3Y2M2MjgzNzQwYjRmYTBlMDShj6zv: 00:20:38.757 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: ]] 00:20:38.757 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: 00:20:38.757 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:20:38.757 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:38.757 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:38.757 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:38.757 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:38.757 00:36:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:38.757 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:38.757 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.757 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.757 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.757 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:38.757 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:38.757 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:38.757 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:38.757 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:38.757 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:38.757 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:38.757 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:38.757 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:38.757 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:38.757 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:38.757 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.757 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.757 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.016 nvme0n1 00:20:39.017 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.017 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:39.017 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.017 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:39.017 00:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.017 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.282 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.282 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:39.282 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.282 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.282 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.282 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:39.282 00:36:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:20:39.282 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:39.282 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:39.282 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:39.282 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:39.282 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY1M2Q4OGY0OGI3Mjk0OTk0N2JjNzAwZDQwNDJjMGRkMzI5YzQyMTllM2QyMjU57PSWyg==: 00:20:39.282 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: 00:20:39.282 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:39.282 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:39.282 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY1M2Q4OGY0OGI3Mjk0OTk0N2JjNzAwZDQwNDJjMGRkMzI5YzQyMTllM2QyMjU57PSWyg==: 00:20:39.282 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: ]] 00:20:39.282 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: 00:20:39.282 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:20:39.282 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:39.282 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:39.282 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:39.282 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:39.282 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:39.282 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:39.282 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.282 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.282 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.282 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:39.282 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:39.282 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:39.282 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:39.282 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:39.282 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:39.282 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:39.282 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:39.282 00:36:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:39.282 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:39.282 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:39.282 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.282 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.282 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.542 nvme0n1 00:20:39.542 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.542 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:39.542 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.542 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:39.542 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.542 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.542 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.542 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:39.542 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.542 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.542 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.542 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:39.542 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:20:39.542 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:39.542 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:39.542 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:39.542 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:39.542 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NThlNDRiNzg4OWVjYzBiMDEyZGUwYjE1NTc3YmY3Y2UlKigi: 00:20:39.542 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: 00:20:39.542 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:39.542 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:39.542 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NThlNDRiNzg4OWVjYzBiMDEyZGUwYjE1NTc3YmY3Y2UlKigi: 00:20:39.542 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: ]] 00:20:39.542 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: 00:20:39.542 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:20:39.542 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:39.542 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:39.542 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:39.542 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:39.542 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:39.543 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:39.543 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.543 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.543 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.543 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:39.543 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:39.543 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:39.543 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:39.543 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:39.543 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:39.543 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:39.543 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:39.543 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:39.543 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:39.543 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:39.543 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.543 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.543 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.801 nvme0n1 00:20:39.801 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.801 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:39.801 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:39.801 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.801 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.801 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.060 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.060 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:40.060 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.060 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.060 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.060 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:40.060 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:20:40.060 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:40.060 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:40.060 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:40.060 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:40.060 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTM4YjlmZWJiOWY0NDQ4ZWViZDI3MDU5NDA1NWI2NGJlZDVjMjVjNGEyNjJmMmI1v2CFBw==: 00:20:40.060 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: 00:20:40.060 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:40.060 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:40.060 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTM4YjlmZWJiOWY0NDQ4ZWViZDI3MDU5NDA1NWI2NGJlZDVjMjVjNGEyNjJmMmI1v2CFBw==: 00:20:40.060 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: ]] 00:20:40.060 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: 00:20:40.060 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:20:40.060 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:40.060 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:40.060 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:40.060 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:40.060 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:40.060 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:40.060 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.060 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.061 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.061 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:40.061 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:40.061 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:40.061 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:40.061 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:40.061 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:40.061 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:40.061 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:40.061 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:40.061 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:40.061 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:40.061 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:40.061 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.061 00:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.320 nvme0n1 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFjY2MwMTU2NDMyMjZmZTMzNjcxYzYzZDFjYjU0M2ExYTUwNzQxNzc2MDk0MTVmOTJiNDgwMWFmZmZhNTQ2M7kLdDI=: 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OTFjY2MwMTU2NDMyMjZmZTMzNjcxYzYzZDFjYjU0M2ExYTUwNzQxNzc2MDk0MTVmOTJiNDgwMWFmZmZhNTQ2M7kLdDI=: 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.320 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.579 nvme0n1 00:20:40.579 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.579 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:40.579 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:40.579 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.579 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.579 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRmNWQwNWVmODA0YmY3Y2M2MjgzNzQwYjRmYTBlMDShj6zv: 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRmNWQwNWVmODA0YmY3Y2M2MjgzNzQwYjRmYTBlMDShj6zv: 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: ]] 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2JlNGEzMWI4MDc5Y2VmNTdkYWM2NzlkNzg3NWQwNjA2MDA5ZGM2OGM2ZWJjOThmNDc0OGY5NzM0MzJlMGY2Moxlf0g=: 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.838 00:36:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.838 00:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.406 nvme0n1 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZWY1M2Q4OGY0OGI3Mjk0OTk0N2JjNzAwZDQwNDJjMGRkMzI5YzQyMTllM2QyMjU57PSWyg==: 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY1M2Q4OGY0OGI3Mjk0OTk0N2JjNzAwZDQwNDJjMGRkMzI5YzQyMTllM2QyMjU57PSWyg==: 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: ]] 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:41.406 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.406 00:36:27 
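For reference, the round trip the trace repeats above for every digest/dhgroup/keyid combination boils down to four host-side RPCs. A minimal sketch follows, assuming SPDK's scripts/rpc.py client (the trace itself goes through the autotest rpc_cmd wrapper) and reusing the address, NQNs, and key names visible in this run; key1/ckey1 stand for keyring entries registered earlier in the test, outside this excerpt.

    # Limit the initiator to the digest and DH group under test for this pass
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    # Attach with DH-HMAC-CHAP: offer key1 to the controller and require ckey1 back from it
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Confirm the controller exists, then detach before the next combination
    ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'    # expect nvme0
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0
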
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.407 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.974 nvme0n1 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NThlNDRiNzg4OWVjYzBiMDEyZGUwYjE1NTc3YmY3Y2UlKigi: 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NThlNDRiNzg4OWVjYzBiMDEyZGUwYjE1NTc3YmY3Y2UlKigi: 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: ]] 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.975 00:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.558 nvme0n1 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTM4YjlmZWJiOWY0NDQ4ZWViZDI3MDU5NDA1NWI2NGJlZDVjMjVjNGEyNjJmMmI1v2CFBw==: 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTM4YjlmZWJiOWY0NDQ4ZWViZDI3MDU5NDA1NWI2NGJlZDVjMjVjNGEyNjJmMmI1v2CFBw==: 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: ]] 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzJhOWQ4ZjRmOGYwMzAxY2U5YTBiMTEwYWMxYjNlZDSYiJEG: 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.558 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.138 nvme0n1 00:20:43.138 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.138 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.138 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.138 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:43.138 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.138 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.138 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.138 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.138 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.138 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.138 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.138 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:43.138 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:20:43.138 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:43.138 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:43.138 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:43.138 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:43.138 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTFjY2MwMTU2NDMyMjZmZTMzNjcxYzYzZDFjYjU0M2ExYTUwNzQxNzc2MDk0MTVmOTJiNDgwMWFmZmZhNTQ2M7kLdDI=: 00:20:43.138 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:43.138 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:43.138 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:43.138 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTFjY2MwMTU2NDMyMjZmZTMzNjcxYzYzZDFjYjU0M2ExYTUwNzQxNzc2MDk0MTVmOTJiNDgwMWFmZmZhNTQ2M7kLdDI=: 00:20:43.138 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:43.138 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:20:43.138 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:43.138 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:43.138 00:36:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:43.138 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:43.138 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:43.138 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:43.138 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.138 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.138 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.139 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:43.139 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:43.139 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:43.139 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:43.139 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.139 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.139 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:43.139 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.139 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:43.139 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:43.139 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:43.139 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:43.139 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.139 00:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.707 nvme0n1 00:20:43.707 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.707 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.707 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:43.707 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.707 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.707 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.707 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.707 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.707 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.707 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.707 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.707 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:43.707 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:43.707 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:43.707 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:43.707 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:43.707 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY1M2Q4OGY0OGI3Mjk0OTk0N2JjNzAwZDQwNDJjMGRkMzI5YzQyMTllM2QyMjU57PSWyg==: 00:20:43.707 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: 00:20:43.707 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:43.707 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:43.707 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY1M2Q4OGY0OGI3Mjk0OTk0N2JjNzAwZDQwNDJjMGRkMzI5YzQyMTllM2QyMjU57PSWyg==: 00:20:43.707 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: ]] 00:20:43.707 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: 00:20:43.707 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:43.707 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.707 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.707 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.707 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:20:43.707 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:43.707 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:43.707 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:43.707 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.707 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.707 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:43.707 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.707 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:43.707 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.708 request: 00:20:43.708 { 00:20:43.708 "name": "nvme0", 00:20:43.708 "trtype": "tcp", 00:20:43.708 "traddr": "10.0.0.1", 00:20:43.708 "adrfam": "ipv4", 00:20:43.708 "trsvcid": "4420", 00:20:43.708 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:43.708 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:43.708 "prchk_reftag": false, 00:20:43.708 "prchk_guard": false, 00:20:43.708 "hdgst": false, 00:20:43.708 "ddgst": false, 00:20:43.708 "allow_unrecognized_csi": false, 00:20:43.708 "method": "bdev_nvme_attach_controller", 00:20:43.708 "req_id": 1 00:20:43.708 } 00:20:43.708 Got JSON-RPC error response 00:20:43.708 response: 00:20:43.708 { 00:20:43.708 "code": -5, 00:20:43.708 "message": "Input/output error" 00:20:43.708 } 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.708 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.968 request: 00:20:43.968 { 00:20:43.968 "name": "nvme0", 00:20:43.968 "trtype": "tcp", 00:20:43.968 "traddr": "10.0.0.1", 00:20:43.968 "adrfam": "ipv4", 00:20:43.968 "trsvcid": "4420", 00:20:43.968 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:43.968 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:43.968 "prchk_reftag": false, 00:20:43.968 "prchk_guard": false, 00:20:43.968 "hdgst": false, 00:20:43.968 "ddgst": false, 00:20:43.968 "dhchap_key": "key2", 00:20:43.968 "allow_unrecognized_csi": false, 00:20:43.968 "method": "bdev_nvme_attach_controller", 00:20:43.968 "req_id": 1 00:20:43.968 } 00:20:43.968 Got JSON-RPC error response 00:20:43.968 response: 00:20:43.968 { 00:20:43.968 "code": -5, 00:20:43.968 "message": "Input/output error" 00:20:43.968 } 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:43.968 00:36:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.968 request: 00:20:43.968 { 00:20:43.968 "name": "nvme0", 00:20:43.968 "trtype": "tcp", 00:20:43.968 "traddr": "10.0.0.1", 00:20:43.968 "adrfam": "ipv4", 00:20:43.968 "trsvcid": "4420", 
00:20:43.968 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:43.968 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:43.968 "prchk_reftag": false, 00:20:43.968 "prchk_guard": false, 00:20:43.968 "hdgst": false, 00:20:43.968 "ddgst": false, 00:20:43.968 "dhchap_key": "key1", 00:20:43.968 "dhchap_ctrlr_key": "ckey2", 00:20:43.968 "allow_unrecognized_csi": false, 00:20:43.968 "method": "bdev_nvme_attach_controller", 00:20:43.968 "req_id": 1 00:20:43.968 } 00:20:43.968 Got JSON-RPC error response 00:20:43.968 response: 00:20:43.968 { 00:20:43.968 "code": -5, 00:20:43.968 "message": "Input/output error" 00:20:43.968 } 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:43.968 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.969 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.969 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:43.969 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.969 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:43.969 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:43.969 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:43.969 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:43.969 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.969 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.969 nvme0n1 00:20:43.969 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.969 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:43.969 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:43.969 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:43.969 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:43.969 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:43.969 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NThlNDRiNzg4OWVjYzBiMDEyZGUwYjE1NTc3YmY3Y2UlKigi: 00:20:43.969 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: 00:20:43.969 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:43.969 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:43.969 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NThlNDRiNzg4OWVjYzBiMDEyZGUwYjE1NTc3YmY3Y2UlKigi: 00:20:43.969 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: ]] 00:20:43.969 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: 00:20:43.969 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.969 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.969 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.969 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.969 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.969 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:20:43.969 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.969 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.969 00:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.228 00:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.228 00:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:44.228 00:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:20:44.228 00:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:44.228 00:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:44.228 00:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:44.228 00:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:44.228 00:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:44.228 00:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:44.228 00:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.228 00:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.228 request: 00:20:44.228 { 00:20:44.228 "name": "nvme0", 00:20:44.228 "dhchap_key": "key1", 00:20:44.228 "dhchap_ctrlr_key": "ckey2", 00:20:44.228 "method": "bdev_nvme_set_keys", 00:20:44.228 "req_id": 1 00:20:44.228 } 00:20:44.228 Got JSON-RPC error response 00:20:44.228 response: 00:20:44.228 
{ 00:20:44.228 "code": -13, 00:20:44.228 "message": "Permission denied" 00:20:44.228 } 00:20:44.228 00:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:44.228 00:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:20:44.228 00:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:44.228 00:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:44.228 00:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:44.228 00:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.228 00:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:20:44.228 00:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.228 00:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.228 00:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.228 00:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:20:44.228 00:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:20:45.165 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.165 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.165 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.165 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:20:45.165 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.165 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:20:45.165 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:45.165 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:45.165 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:45.165 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:45.165 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:45.165 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWY1M2Q4OGY0OGI3Mjk0OTk0N2JjNzAwZDQwNDJjMGRkMzI5YzQyMTllM2QyMjU57PSWyg==: 00:20:45.165 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: 00:20:45.165 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:45.165 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:45.165 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWY1M2Q4OGY0OGI3Mjk0OTk0N2JjNzAwZDQwNDJjMGRkMzI5YzQyMTllM2QyMjU57PSWyg==: 00:20:45.165 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: ]] 00:20:45.165 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTU4NDA2NDI1YzAwNDg0MzYyZTRiNTU1MTU0MDg4NzRiYjJmZDJlMmMxMjYwODVmjAPCzg==: 00:20:45.165 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:20:45.165 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:45.165 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:45.165 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:45.165 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.165 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.165 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:45.165 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.165 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:45.165 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:45.165 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:45.165 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:45.165 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.165 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.424 nvme0n1 00:20:45.424 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.424 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:45.424 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:45.424 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:45.424 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:45.424 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:45.424 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NThlNDRiNzg4OWVjYzBiMDEyZGUwYjE1NTc3YmY3Y2UlKigi: 00:20:45.424 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: 00:20:45.424 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:45.424 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:45.424 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NThlNDRiNzg4OWVjYzBiMDEyZGUwYjE1NTc3YmY3Y2UlKigi: 00:20:45.424 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: ]] 00:20:45.424 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDhlYzA5NTBlOGFhZWI0ZjdjYjNhZjVhMjg4OWJiZTQKR64T: 00:20:45.424 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:45.424 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:20:45.424 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:45.424 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:45.424 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:45.424 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:45.424 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:45.424 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:45.424 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.424 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.424 request: 00:20:45.424 { 00:20:45.424 "name": "nvme0", 00:20:45.424 "dhchap_key": "key2", 00:20:45.424 "dhchap_ctrlr_key": "ckey1", 00:20:45.424 "method": "bdev_nvme_set_keys", 00:20:45.424 "req_id": 1 00:20:45.424 } 00:20:45.424 Got JSON-RPC error response 00:20:45.424 response: 00:20:45.424 { 00:20:45.424 "code": -13, 00:20:45.424 "message": "Permission denied" 00:20:45.424 } 00:20:45.424 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:45.424 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:20:45.424 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:45.424 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:45.424 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:45.424 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.424 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.424 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:20:45.424 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.424 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.424 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:20:45.424 00:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:20:46.361 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.361 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:20:46.361 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.361 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.362 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.621 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:20:46.621 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:20:46.621 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:20:46.621 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:20:46.621 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # 
nvmfcleanup 00:20:46.621 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:20:46.621 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:46.621 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:20:46.621 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:46.621 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:46.621 rmmod nvme_tcp 00:20:46.621 rmmod nvme_fabrics 00:20:46.621 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:46.621 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:20:46.621 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:20:46.621 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@513 -- # '[' -n 92603 ']' 00:20:46.621 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # killprocess 92603 00:20:46.621 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 92603 ']' 00:20:46.621 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 92603 00:20:46.621 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:20:46.621 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:46.621 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92603 00:20:46.621 killing process with pid 92603 00:20:46.621 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:46.621 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:46.621 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92603' 00:20:46.621 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 92603 00:20:46.621 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 92603 00:20:46.621 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:20:46.621 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:20:46.621 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:20:46.621 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:20:46.621 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:20:46.621 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-save 00:20:46.621 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-restore 00:20:46.621 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:46.621 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:46.621 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:46.621 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:46.621 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:46.881 00:36:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:46.881 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:46.881 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:46.881 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:46.881 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:46.881 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:46.881 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:46.881 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:46.881 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:46.881 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:46.881 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:46.881 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.881 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:46.881 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.881 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:20:46.881 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:46.881 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:46.881 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:20:46.881 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:20:46.881 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # echo 0 00:20:46.881 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:46.881 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:46.881 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:46.881 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:46.881 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:20:46.881 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:20:46.881 00:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@722 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:47.818 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:47.818 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
00:20:47.818 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:47.818 00:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.VYQ /tmp/spdk.key-null.azY /tmp/spdk.key-sha256.6Ft /tmp/spdk.key-sha384.L6G /tmp/spdk.key-sha512.hYg /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:20:47.818 00:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:48.387 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:48.387 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:48.387 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:48.387 00:20:48.387 real 0m34.780s 00:20:48.387 user 0m32.182s 00:20:48.387 sys 0m3.767s 00:20:48.387 00:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:48.387 00:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.387 ************************************ 00:20:48.387 END TEST nvmf_auth_host 00:20:48.387 ************************************ 00:20:48.387 00:36:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:20:48.387 00:36:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:48.387 00:36:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:48.387 00:36:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:48.387 00:36:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.387 ************************************ 00:20:48.387 START TEST nvmf_digest 00:20:48.387 ************************************ 00:20:48.387 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:48.387 * Looking for test storage... 
00:20:48.387 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:48.387 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:48.387 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:20:48.387 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:48.387 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:48.387 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:48.387 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:48.387 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:48.387 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:20:48.387 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:20:48.387 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:20:48.387 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:20:48.387 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:20:48.387 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:20:48.387 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:20:48.387 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:48.387 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:20:48.387 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:20:48.387 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:48.387 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:48.387 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:20:48.387 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:20:48.387 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:48.387 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:20:48.387 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:20:48.387 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:20:48.387 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:20:48.387 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:48.387 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:20:48.647 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:20:48.647 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:48.647 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:48.647 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:20:48.647 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:48.647 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:48.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.647 --rc genhtml_branch_coverage=1 00:20:48.647 --rc genhtml_function_coverage=1 00:20:48.647 --rc genhtml_legend=1 00:20:48.647 --rc geninfo_all_blocks=1 00:20:48.647 --rc geninfo_unexecuted_blocks=1 00:20:48.647 00:20:48.647 ' 00:20:48.647 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:48.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.647 --rc genhtml_branch_coverage=1 00:20:48.647 --rc genhtml_function_coverage=1 00:20:48.647 --rc genhtml_legend=1 00:20:48.647 --rc geninfo_all_blocks=1 00:20:48.647 --rc geninfo_unexecuted_blocks=1 00:20:48.647 00:20:48.647 ' 00:20:48.647 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:48.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.647 --rc genhtml_branch_coverage=1 00:20:48.647 --rc genhtml_function_coverage=1 00:20:48.647 --rc genhtml_legend=1 00:20:48.647 --rc geninfo_all_blocks=1 00:20:48.647 --rc geninfo_unexecuted_blocks=1 00:20:48.647 00:20:48.647 ' 00:20:48.647 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:48.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.647 --rc genhtml_branch_coverage=1 00:20:48.647 --rc genhtml_function_coverage=1 00:20:48.647 --rc genhtml_legend=1 00:20:48.647 --rc geninfo_all_blocks=1 00:20:48.647 --rc geninfo_unexecuted_blocks=1 00:20:48.647 00:20:48.647 ' 00:20:48.647 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:48.647 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:20:48.647 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:48.647 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:48.647 00:36:34 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:48.647 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:48.647 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:48.647 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:48.647 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:48.647 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:48.647 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:48.647 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:48.647 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:20:48.647 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:20:48.647 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:48.647 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:48.647 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:48.647 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:48.647 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:48.647 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:20:48.647 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:48.647 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:48.647 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:48.647 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.647 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.647 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.647 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:48.648 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # prepare_net_devs 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@434 -- # local -g is_hw=no 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # remove_spdk_ns 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@456 -- # nvmf_veth_init 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:48.648 Cannot find device "nvmf_init_br" 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:48.648 Cannot find device "nvmf_init_br2" 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:48.648 Cannot find device "nvmf_tgt_br" 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:20:48.648 Cannot find device "nvmf_tgt_br2" 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:48.648 Cannot find device "nvmf_init_br" 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:48.648 Cannot find device "nvmf_init_br2" 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:48.648 Cannot find device "nvmf_tgt_br" 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:48.648 Cannot find device "nvmf_tgt_br2" 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:48.648 Cannot find device "nvmf_br" 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:48.648 Cannot find device "nvmf_init_if" 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:48.648 Cannot find device "nvmf_init_if2" 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:48.648 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:48.648 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:48.648 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:48.908 00:36:34 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:48.908 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:48.908 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:20:48.908 00:20:48.908 --- 10.0.0.3 ping statistics --- 00:20:48.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.908 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:48.908 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:48.908 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:20:48.908 00:20:48.908 --- 10.0.0.4 ping statistics --- 00:20:48.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.908 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:48.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:48.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:20:48.908 00:20:48.908 --- 10.0.0.1 ping statistics --- 00:20:48.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.908 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:48.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:48.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:20:48.908 00:20:48.908 --- 10.0.0.2 ping statistics --- 00:20:48.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.908 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@457 -- # return 0 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:48.908 ************************************ 00:20:48.908 START TEST nvmf_digest_clean 00:20:48.908 ************************************ 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # nvmfpid=94242 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # waitforlisten 94242 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 94242 ']' 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:48.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:48.908 00:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:49.168 [2024-12-17 00:36:34.925149] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:20:49.168 [2024-12-17 00:36:34.925243] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.168 [2024-12-17 00:36:35.062495] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.168 [2024-12-17 00:36:35.105340] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:49.168 [2024-12-17 00:36:35.105404] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:49.168 [2024-12-17 00:36:35.105419] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:49.168 [2024-12-17 00:36:35.105429] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:49.168 [2024-12-17 00:36:35.105437] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:49.168 [2024-12-17 00:36:35.105471] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.427 00:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:49.427 00:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:20:49.427 00:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:49.427 00:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:49.427 00:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:49.427 00:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:49.427 00:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:20:49.427 00:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:20:49.427 00:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:20:49.427 00:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.427 00:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:49.427 [2024-12-17 00:36:35.265441] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:49.427 null0 00:20:49.427 [2024-12-17 00:36:35.301130] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:49.427 [2024-12-17 00:36:35.325228] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:49.427 00:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.427 00:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:20:49.427 00:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:49.427 00:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:49.427 00:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:20:49.427 00:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:20:49.427 00:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:20:49.427 00:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:49.427 00:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94263 00:20:49.427 00:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94263 /var/tmp/bperf.sock 00:20:49.427 00:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:49.427 00:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 94263 ']' 00:20:49.427 00:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:20:49.427 00:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:49.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:49.427 00:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:49.427 00:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:49.427 00:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:49.427 [2024-12-17 00:36:35.387180] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:20:49.427 [2024-12-17 00:36:35.387275] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94263 ] 00:20:49.685 [2024-12-17 00:36:35.527952] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.685 [2024-12-17 00:36:35.571270] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:49.685 00:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:49.685 00:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:20:49.685 00:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:49.685 00:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:49.685 00:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:49.944 [2024-12-17 00:36:35.930056] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:50.202 00:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:50.202 00:36:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:50.461 nvme0n1 00:20:50.461 00:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:50.461 00:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:50.461 Running I/O for 2 seconds... 
00:20:52.771 17780.00 IOPS, 69.45 MiB/s [2024-12-17T00:36:38.774Z] 17780.00 IOPS, 69.45 MiB/s 00:20:52.771 Latency(us) 00:20:52.771 [2024-12-17T00:36:38.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.771 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:52.771 nvme0n1 : 2.01 17813.89 69.59 0.00 0.00 7180.29 6613.18 18230.92 00:20:52.771 [2024-12-17T00:36:38.774Z] =================================================================================================================== 00:20:52.771 [2024-12-17T00:36:38.774Z] Total : 17813.89 69.59 0.00 0.00 7180.29 6613.18 18230.92 00:20:52.771 { 00:20:52.771 "results": [ 00:20:52.771 { 00:20:52.771 "job": "nvme0n1", 00:20:52.771 "core_mask": "0x2", 00:20:52.771 "workload": "randread", 00:20:52.771 "status": "finished", 00:20:52.771 "queue_depth": 128, 00:20:52.771 "io_size": 4096, 00:20:52.771 "runtime": 2.01051, 00:20:52.771 "iops": 17813.888018462978, 00:20:52.771 "mibps": 69.58550007212101, 00:20:52.771 "io_failed": 0, 00:20:52.771 "io_timeout": 0, 00:20:52.771 "avg_latency_us": 7180.29377025878, 00:20:52.771 "min_latency_us": 6613.178181818182, 00:20:52.771 "max_latency_us": 18230.923636363637 00:20:52.771 } 00:20:52.771 ], 00:20:52.771 "core_count": 1 00:20:52.771 } 00:20:52.771 00:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:52.771 00:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:52.771 00:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:52.771 00:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:52.771 00:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:52.771 | select(.opcode=="crc32c") 00:20:52.771 | "\(.module_name) \(.executed)"' 00:20:52.771 00:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:52.771 00:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:52.771 00:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:52.771 00:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:52.771 00:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94263 00:20:52.771 00:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 94263 ']' 00:20:52.771 00:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 94263 00:20:52.771 00:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:20:52.771 00:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:52.771 00:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94263 00:20:52.771 00:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:52.771 killing process with pid 94263 00:20:52.771 00:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:52.771 00:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94263' 00:20:52.771 00:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 94263 00:20:52.771 Received shutdown signal, test time was about 2.000000 seconds 00:20:52.771 00:20:52.771 Latency(us) 00:20:52.771 [2024-12-17T00:36:38.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.771 [2024-12-17T00:36:38.774Z] =================================================================================================================== 00:20:52.771 [2024-12-17T00:36:38.774Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:52.771 00:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 94263 00:20:53.030 00:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:20:53.030 00:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:53.030 00:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:53.030 00:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:20:53.030 00:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:20:53.030 00:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:20:53.030 00:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:53.030 00:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94314 00:20:53.030 00:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94314 /var/tmp/bperf.sock 00:20:53.030 00:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 94314 ']' 00:20:53.030 00:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:53.030 00:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:53.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:53.030 00:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:53.030 00:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:53.030 00:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:53.030 00:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:53.030 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:53.031 Zero copy mechanism will not be used. 00:20:53.031 [2024-12-17 00:36:38.941763] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:20:53.031 [2024-12-17 00:36:38.941861] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94314 ] 00:20:53.289 [2024-12-17 00:36:39.081082] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.289 [2024-12-17 00:36:39.115433] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.289 00:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:53.289 00:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:20:53.289 00:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:53.289 00:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:53.289 00:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:53.548 [2024-12-17 00:36:39.398602] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:53.548 00:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:53.548 00:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:53.807 nvme0n1 00:20:53.807 00:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:53.807 00:36:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:54.065 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:54.065 Zero copy mechanism will not be used. 00:20:54.065 Running I/O for 2 seconds... 
00:20:55.938 8624.00 IOPS, 1078.00 MiB/s [2024-12-17T00:36:41.941Z] 8688.00 IOPS, 1086.00 MiB/s 00:20:55.938 Latency(us) 00:20:55.938 [2024-12-17T00:36:41.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.938 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:20:55.938 nvme0n1 : 2.00 8685.74 1085.72 0.00 0.00 1839.34 1653.29 8579.26 00:20:55.938 [2024-12-17T00:36:41.941Z] =================================================================================================================== 00:20:55.938 [2024-12-17T00:36:41.941Z] Total : 8685.74 1085.72 0.00 0.00 1839.34 1653.29 8579.26 00:20:55.938 { 00:20:55.938 "results": [ 00:20:55.938 { 00:20:55.938 "job": "nvme0n1", 00:20:55.938 "core_mask": "0x2", 00:20:55.938 "workload": "randread", 00:20:55.938 "status": "finished", 00:20:55.938 "queue_depth": 16, 00:20:55.938 "io_size": 131072, 00:20:55.938 "runtime": 2.002362, 00:20:55.938 "iops": 8685.742138534391, 00:20:55.938 "mibps": 1085.717767316799, 00:20:55.938 "io_failed": 0, 00:20:55.938 "io_timeout": 0, 00:20:55.938 "avg_latency_us": 1839.338268796521, 00:20:55.938 "min_latency_us": 1653.2945454545454, 00:20:55.938 "max_latency_us": 8579.258181818182 00:20:55.938 } 00:20:55.938 ], 00:20:55.938 "core_count": 1 00:20:55.938 } 00:20:55.938 00:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:55.938 00:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:55.938 00:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:55.938 00:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:55.938 | select(.opcode=="crc32c") 00:20:55.938 | "\(.module_name) \(.executed)"' 00:20:55.938 00:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:56.197 00:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:56.197 00:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:56.197 00:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:56.197 00:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:56.197 00:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94314 00:20:56.197 00:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 94314 ']' 00:20:56.197 00:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 94314 00:20:56.197 00:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:20:56.197 00:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:56.197 00:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94314 00:20:56.197 00:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:56.197 killing process with pid 94314 00:20:56.197 00:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:56.197 00:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94314' 00:20:56.197 00:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 94314 00:20:56.197 Received shutdown signal, test time was about 2.000000 seconds 00:20:56.197 00:20:56.197 Latency(us) 00:20:56.197 [2024-12-17T00:36:42.200Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.197 [2024-12-17T00:36:42.200Z] =================================================================================================================== 00:20:56.197 [2024-12-17T00:36:42.200Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:56.197 00:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 94314 00:20:56.456 00:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:20:56.456 00:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:56.456 00:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:56.456 00:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:20:56.456 00:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:20:56.456 00:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:20:56.456 00:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:56.456 00:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94361 00:20:56.456 00:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94361 /var/tmp/bperf.sock 00:20:56.456 00:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:56.456 00:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 94361 ']' 00:20:56.456 00:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:56.456 00:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:56.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:56.456 00:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:56.456 00:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:56.456 00:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:56.456 [2024-12-17 00:36:42.368488] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
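A quick sanity check on the figures reported above: bdevperf derives MiB/s from IOPS and the I/O size, so for the randread run 8685.74 IOPS at 131072 bytes gives 8685.74 * 131072 / 1048576 = 1085.72 MiB/s, matching the reported mibps value. The same one-liner works for any of the runs below; the numbers are taken straight from the JSON output, nothing else is assumed:

  awk 'BEGIN { printf "%.2f MiB/s\n", 8685.74 * 131072 / 1048576 }'   # prints 1085.72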
00:20:56.456 [2024-12-17 00:36:42.368614] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94361 ] 00:20:56.715 [2024-12-17 00:36:42.502197] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.715 [2024-12-17 00:36:42.538865] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.715 00:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:56.715 00:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:20:56.715 00:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:56.715 00:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:56.715 00:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:56.974 [2024-12-17 00:36:42.821858] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:56.974 00:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:56.974 00:36:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:57.232 nvme0n1 00:20:57.232 00:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:57.232 00:36:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:57.490 Running I/O for 2 seconds... 
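After each run the test reads the accel framework statistics back over the same RPC socket and keeps only the crc32c entry; because these digest-clean runs pass scan_dsa=false, the expected module is software, which the '[[ software == software ]]' comparison in the log verifies. The query, exactly as issued here:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'

It prints '<module_name> <executed>'; the test only requires that the module is software and that the executed count is greater than zero.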
00:20:59.361 19051.00 IOPS, 74.42 MiB/s [2024-12-17T00:36:45.364Z] 19177.50 IOPS, 74.91 MiB/s 00:20:59.361 Latency(us) 00:20:59.361 [2024-12-17T00:36:45.364Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.361 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:59.361 nvme0n1 : 2.00 19199.21 75.00 0.00 0.00 6661.62 6106.76 14358.34 00:20:59.361 [2024-12-17T00:36:45.364Z] =================================================================================================================== 00:20:59.361 [2024-12-17T00:36:45.364Z] Total : 19199.21 75.00 0.00 0.00 6661.62 6106.76 14358.34 00:20:59.361 { 00:20:59.361 "results": [ 00:20:59.361 { 00:20:59.361 "job": "nvme0n1", 00:20:59.361 "core_mask": "0x2", 00:20:59.361 "workload": "randwrite", 00:20:59.361 "status": "finished", 00:20:59.361 "queue_depth": 128, 00:20:59.361 "io_size": 4096, 00:20:59.361 "runtime": 2.004405, 00:20:59.361 "iops": 19199.213731755808, 00:20:59.361 "mibps": 74.99692863967113, 00:20:59.361 "io_failed": 0, 00:20:59.361 "io_timeout": 0, 00:20:59.361 "avg_latency_us": 6661.619611020687, 00:20:59.361 "min_latency_us": 6106.763636363637, 00:20:59.361 "max_latency_us": 14358.341818181818 00:20:59.361 } 00:20:59.361 ], 00:20:59.361 "core_count": 1 00:20:59.361 } 00:20:59.361 00:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:59.361 00:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:59.361 00:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:59.361 00:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:59.361 | select(.opcode=="crc32c") 00:20:59.361 | "\(.module_name) \(.executed)"' 00:20:59.361 00:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:59.928 00:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:59.928 00:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:59.928 00:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:59.928 00:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:59.928 00:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94361 00:20:59.928 00:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 94361 ']' 00:20:59.928 00:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 94361 00:20:59.928 00:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:20:59.928 00:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:59.928 00:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94361 00:20:59.928 killing process with pid 94361 00:20:59.928 Received shutdown signal, test time was about 2.000000 seconds 00:20:59.928 00:20:59.928 Latency(us) 00:20:59.928 [2024-12-17T00:36:45.931Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:20:59.928 [2024-12-17T00:36:45.931Z] =================================================================================================================== 00:20:59.928 [2024-12-17T00:36:45.931Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:59.928 00:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:59.928 00:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:59.928 00:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94361' 00:20:59.928 00:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 94361 00:20:59.928 00:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 94361 00:20:59.928 00:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:20:59.928 00:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:59.928 00:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:59.928 00:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:20:59.928 00:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:20:59.928 00:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:20:59.928 00:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:59.928 00:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94415 00:20:59.928 00:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94415 /var/tmp/bperf.sock 00:20:59.928 00:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 94415 ']' 00:20:59.928 00:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:59.928 00:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:59.928 00:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:59.928 00:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:59.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:59.928 00:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:59.928 00:36:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:59.928 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:59.928 Zero copy mechanism will not be used. 00:20:59.928 [2024-12-17 00:36:45.841849] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:20:59.929 [2024-12-17 00:36:45.841951] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94415 ] 00:21:00.187 [2024-12-17 00:36:45.971839] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.187 [2024-12-17 00:36:46.005520] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.187 00:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:00.187 00:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:21:00.187 00:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:00.187 00:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:00.187 00:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:00.446 [2024-12-17 00:36:46.329052] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:00.446 00:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:00.446 00:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:00.704 nvme0n1 00:21:00.963 00:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:00.963 00:36:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:00.963 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:00.963 Zero copy mechanism will not be used. 00:21:00.963 Running I/O for 2 seconds... 
00:21:03.279 7459.00 IOPS, 932.38 MiB/s [2024-12-17T00:36:49.282Z] 7510.00 IOPS, 938.75 MiB/s 00:21:03.279 Latency(us) 00:21:03.279 [2024-12-17T00:36:49.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:03.279 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:03.279 nvme0n1 : 2.00 7505.08 938.14 0.00 0.00 2127.08 1422.43 4170.47 00:21:03.279 [2024-12-17T00:36:49.282Z] =================================================================================================================== 00:21:03.279 [2024-12-17T00:36:49.282Z] Total : 7505.08 938.14 0.00 0.00 2127.08 1422.43 4170.47 00:21:03.279 { 00:21:03.279 "results": [ 00:21:03.279 { 00:21:03.279 "job": "nvme0n1", 00:21:03.279 "core_mask": "0x2", 00:21:03.279 "workload": "randwrite", 00:21:03.279 "status": "finished", 00:21:03.279 "queue_depth": 16, 00:21:03.279 "io_size": 131072, 00:21:03.279 "runtime": 2.004108, 00:21:03.279 "iops": 7505.084556321316, 00:21:03.279 "mibps": 938.1355695401645, 00:21:03.279 "io_failed": 0, 00:21:03.279 "io_timeout": 0, 00:21:03.279 "avg_latency_us": 2127.0811636073518, 00:21:03.279 "min_latency_us": 1422.429090909091, 00:21:03.279 "max_latency_us": 4170.472727272727 00:21:03.279 } 00:21:03.279 ], 00:21:03.279 "core_count": 1 00:21:03.279 } 00:21:03.279 00:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:03.279 00:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:03.279 00:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:03.279 00:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:03.279 | select(.opcode=="crc32c") 00:21:03.279 | "\(.module_name) \(.executed)"' 00:21:03.279 00:36:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:03.279 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:03.279 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:03.279 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:03.279 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:03.279 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94415 00:21:03.279 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 94415 ']' 00:21:03.279 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 94415 00:21:03.279 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:21:03.279 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:03.279 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94415 00:21:03.279 killing process with pid 94415 00:21:03.279 Received shutdown signal, test time was about 2.000000 seconds 00:21:03.279 00:21:03.279 Latency(us) 00:21:03.279 [2024-12-17T00:36:49.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:03.279 [2024-12-17T00:36:49.282Z] =================================================================================================================== 00:21:03.279 [2024-12-17T00:36:49.282Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:03.279 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:03.279 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:03.279 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94415' 00:21:03.279 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 94415 00:21:03.279 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 94415 00:21:03.552 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 94242 00:21:03.552 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 94242 ']' 00:21:03.552 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 94242 00:21:03.552 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:21:03.552 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:03.552 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94242 00:21:03.552 killing process with pid 94242 00:21:03.552 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:03.552 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:03.552 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94242' 00:21:03.552 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 94242 00:21:03.552 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 94242 00:21:03.552 00:21:03.552 real 0m14.628s 00:21:03.552 user 0m28.560s 00:21:03.552 sys 0m4.217s 00:21:03.552 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:03.552 ************************************ 00:21:03.552 END TEST nvmf_digest_clean 00:21:03.552 ************************************ 00:21:03.552 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:03.552 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:21:03.552 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:03.552 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:03.552 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:03.552 ************************************ 00:21:03.552 START TEST nvmf_digest_error 00:21:03.552 ************************************ 00:21:03.552 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:21:03.552 00:36:49 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:21:03.552 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:03.552 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:03.552 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:03.552 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # nvmfpid=94486 00:21:03.552 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # waitforlisten 94486 00:21:03.834 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:03.834 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 94486 ']' 00:21:03.834 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.834 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:03.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:03.834 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.834 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:03.834 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:03.834 [2024-12-17 00:36:49.611341] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:21:03.834 [2024-12-17 00:36:49.611439] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:03.834 [2024-12-17 00:36:49.749009] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.834 [2024-12-17 00:36:49.782312] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:03.834 [2024-12-17 00:36:49.782387] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:03.834 [2024-12-17 00:36:49.782413] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:03.834 [2024-12-17 00:36:49.782419] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:03.834 [2024-12-17 00:36:49.782425] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
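The nvmf_digest_error test starting here reuses the same bdevperf flow but routes the target's crc32c operations through the accel 'error' module so the TCP data digest can be corrupted on demand. A condensed sketch of the RPCs logged below (rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py; target RPCs go to the default /var/tmp/spdk.sock, initiator RPCs to /var/tmp/bperf.sock):

  # target side (nvmf_tgt started with --wait-for-rpc): crc32c handled by the error module
  rpc.py accel_assign_opc -o crc32c -m error
  # ... null0 bdev and TCP listener on 10.0.0.3:4420 are created, as in the log
  # initiator side (bdevperf -w randread -o 4096 -q 128 -z)
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc.py accel_error_inject_error -o crc32c -t disable        # target: no corruption while attaching
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256  # target: start corrupting digests
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

With corruption enabled, every affected READ fails the initiator's data digest check, which is what the stream of nvme_tcp.c errors further below shows.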
00:21:03.834 [2024-12-17 00:36:49.782450] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.107 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:04.107 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:21:04.107 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:04.107 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:04.107 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:04.107 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:04.107 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:21:04.107 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.107 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:04.107 [2024-12-17 00:36:49.882822] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:21:04.107 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.107 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:21:04.107 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:21:04.107 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.107 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:04.107 [2024-12-17 00:36:49.921752] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:04.107 null0 00:21:04.107 [2024-12-17 00:36:49.952795] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:04.107 [2024-12-17 00:36:49.976967] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:04.107 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.107 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:21:04.107 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:04.107 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:21:04.107 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:21:04.107 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:21:04.107 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94515 00:21:04.107 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:21:04.107 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94515 /var/tmp/bperf.sock 00:21:04.107 00:36:49 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 94515 ']' 00:21:04.107 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:04.107 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:04.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:04.107 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:04.107 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:04.107 00:36:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:04.107 [2024-12-17 00:36:50.041423] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:21:04.107 [2024-12-17 00:36:50.041528] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94515 ] 00:21:04.366 [2024-12-17 00:36:50.178351] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.366 [2024-12-17 00:36:50.211615] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:04.366 [2024-12-17 00:36:50.239894] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:04.934 00:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:04.934 00:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:21:04.934 00:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:04.934 00:36:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:05.502 00:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:05.502 00:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.502 00:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:05.502 00:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.502 00:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:05.502 00:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:05.502 nvme0n1 00:21:05.502 00:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:21:05.502 00:36:51 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.502 00:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:05.761 00:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.761 00:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:05.761 00:36:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:05.761 Running I/O for 2 seconds... 00:21:05.761 [2024-12-17 00:36:51.624115] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:05.761 [2024-12-17 00:36:51.624176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.761 [2024-12-17 00:36:51.624207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:05.761 [2024-12-17 00:36:51.638556] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:05.761 [2024-12-17 00:36:51.638610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.761 [2024-12-17 00:36:51.638622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:05.761 [2024-12-17 00:36:51.652707] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:05.761 [2024-12-17 00:36:51.652760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.762 [2024-12-17 00:36:51.652774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:05.762 [2024-12-17 00:36:51.667066] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:05.762 [2024-12-17 00:36:51.667116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.762 [2024-12-17 00:36:51.667143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:05.762 [2024-12-17 00:36:51.681269] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:05.762 [2024-12-17 00:36:51.681342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.762 [2024-12-17 00:36:51.681356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:05.762 [2024-12-17 00:36:51.695473] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:05.762 [2024-12-17 00:36:51.695524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6685 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.762 [2024-12-17 00:36:51.695537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:05.762 [2024-12-17 00:36:51.710951] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:05.762 [2024-12-17 00:36:51.711016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.762 [2024-12-17 00:36:51.711044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:05.762 [2024-12-17 00:36:51.728368] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:05.762 [2024-12-17 00:36:51.728431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.762 [2024-12-17 00:36:51.728446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:05.762 [2024-12-17 00:36:51.744235] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:05.762 [2024-12-17 00:36:51.744285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.762 [2024-12-17 00:36:51.744330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:05.762 [2024-12-17 00:36:51.758646] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:05.762 [2024-12-17 00:36:51.758694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.762 [2024-12-17 00:36:51.758723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.021 [2024-12-17 00:36:51.774977] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.021 [2024-12-17 00:36:51.775030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.021 [2024-12-17 00:36:51.775059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.021 [2024-12-17 00:36:51.789193] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.021 [2024-12-17 00:36:51.789244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.021 [2024-12-17 00:36:51.789273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.021 [2024-12-17 00:36:51.803265] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.021 [2024-12-17 00:36:51.803338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:25 nsid:1 lba:22924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.021 [2024-12-17 00:36:51.803353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.021 [2024-12-17 00:36:51.817601] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.022 [2024-12-17 00:36:51.817653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.022 [2024-12-17 00:36:51.817665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.022 [2024-12-17 00:36:51.831662] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.022 [2024-12-17 00:36:51.831710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.022 [2024-12-17 00:36:51.831738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.022 [2024-12-17 00:36:51.845903] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.022 [2024-12-17 00:36:51.845952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.022 [2024-12-17 00:36:51.845980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.022 [2024-12-17 00:36:51.860066] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.022 [2024-12-17 00:36:51.860116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.022 [2024-12-17 00:36:51.860144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.022 [2024-12-17 00:36:51.874450] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.022 [2024-12-17 00:36:51.874499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.022 [2024-12-17 00:36:51.874527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.022 [2024-12-17 00:36:51.889278] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.022 [2024-12-17 00:36:51.889354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.022 [2024-12-17 00:36:51.889370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.022 [2024-12-17 00:36:51.903872] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.022 [2024-12-17 00:36:51.903922] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.022 [2024-12-17 00:36:51.903951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.022 [2024-12-17 00:36:51.917930] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.022 [2024-12-17 00:36:51.917978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.022 [2024-12-17 00:36:51.918006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.022 [2024-12-17 00:36:51.931930] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.022 [2024-12-17 00:36:51.931978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.022 [2024-12-17 00:36:51.932007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.022 [2024-12-17 00:36:51.946541] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.022 [2024-12-17 00:36:51.946592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.022 [2024-12-17 00:36:51.946620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.022 [2024-12-17 00:36:51.960606] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.022 [2024-12-17 00:36:51.960658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.022 [2024-12-17 00:36:51.960671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.022 [2024-12-17 00:36:51.974627] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.022 [2024-12-17 00:36:51.974675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.022 [2024-12-17 00:36:51.974703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.022 [2024-12-17 00:36:51.988860] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.022 [2024-12-17 00:36:51.988910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.022 [2024-12-17 00:36:51.988939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.022 [2024-12-17 00:36:52.002843] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.022 
[2024-12-17 00:36:52.002892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.022 [2024-12-17 00:36:52.002921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.022 [2024-12-17 00:36:52.017035] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.022 [2024-12-17 00:36:52.017084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.022 [2024-12-17 00:36:52.017112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.281 [2024-12-17 00:36:52.032256] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.282 [2024-12-17 00:36:52.032333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.282 [2024-12-17 00:36:52.032349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.282 [2024-12-17 00:36:52.046411] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.282 [2024-12-17 00:36:52.046463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.282 [2024-12-17 00:36:52.046476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.282 [2024-12-17 00:36:52.060390] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.282 [2024-12-17 00:36:52.060440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.282 [2024-12-17 00:36:52.060468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.282 [2024-12-17 00:36:52.074601] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.282 [2024-12-17 00:36:52.074653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.282 [2024-12-17 00:36:52.074666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.282 [2024-12-17 00:36:52.088743] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.282 [2024-12-17 00:36:52.088795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.282 [2024-12-17 00:36:52.088808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.282 [2024-12-17 00:36:52.102869] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x6b2510) 00:21:06.282 [2024-12-17 00:36:52.102918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.282 [2024-12-17 00:36:52.102946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.282 [2024-12-17 00:36:52.117084] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.282 [2024-12-17 00:36:52.117132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.282 [2024-12-17 00:36:52.117160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.282 [2024-12-17 00:36:52.131090] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.282 [2024-12-17 00:36:52.131139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:16666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.282 [2024-12-17 00:36:52.131167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.282 [2024-12-17 00:36:52.145264] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.282 [2024-12-17 00:36:52.145336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.282 [2024-12-17 00:36:52.145350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.282 [2024-12-17 00:36:52.159284] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.282 [2024-12-17 00:36:52.159359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.282 [2024-12-17 00:36:52.159372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.282 [2024-12-17 00:36:52.173333] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.282 [2024-12-17 00:36:52.173390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.282 [2024-12-17 00:36:52.173419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.282 [2024-12-17 00:36:52.187405] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.282 [2024-12-17 00:36:52.187455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.282 [2024-12-17 00:36:52.187467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.282 [2024-12-17 00:36:52.201425] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.282 [2024-12-17 00:36:52.201473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.282 [2024-12-17 00:36:52.201501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.282 [2024-12-17 00:36:52.215490] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.282 [2024-12-17 00:36:52.215541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.282 [2024-12-17 00:36:52.215553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.282 [2024-12-17 00:36:52.229559] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.282 [2024-12-17 00:36:52.229607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.282 [2024-12-17 00:36:52.229636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.282 [2024-12-17 00:36:52.243770] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.282 [2024-12-17 00:36:52.243818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.282 [2024-12-17 00:36:52.243846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.282 [2024-12-17 00:36:52.257875] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.282 [2024-12-17 00:36:52.257924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.282 [2024-12-17 00:36:52.257952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.282 [2024-12-17 00:36:52.271895] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.282 [2024-12-17 00:36:52.271943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.282 [2024-12-17 00:36:52.271971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.541 [2024-12-17 00:36:52.287912] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.542 [2024-12-17 00:36:52.287965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.542 [2024-12-17 00:36:52.287994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
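These repeating entries are the expected outcome of the crc32c corruption injected above: the target produces a bad data digest, the initiator's --ddgst verification in nvme_tcp.c rejects it ('data digest error on tqpair'), and the command is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. retryable, so with --bdev-retry-count -1 the bdev layer resubmits instead of failing the I/O upward. When reading a capture of such a run, the failures are easy to count (the log file name here is only an example):

  grep -c 'data digest error on tqpair' bdevperf.log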
00:21:06.542 [2024-12-17 00:36:52.306333] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.542 [2024-12-17 00:36:52.306414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.542 [2024-12-17 00:36:52.306430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.542 [2024-12-17 00:36:52.321999] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.542 [2024-12-17 00:36:52.322062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.542 [2024-12-17 00:36:52.322076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.542 [2024-12-17 00:36:52.336932] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.542 [2024-12-17 00:36:52.336996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:18401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.542 [2024-12-17 00:36:52.337024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.542 [2024-12-17 00:36:52.352033] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.542 [2024-12-17 00:36:52.352082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:25598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.542 [2024-12-17 00:36:52.352111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.542 [2024-12-17 00:36:52.367200] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.542 [2024-12-17 00:36:52.367250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.542 [2024-12-17 00:36:52.367279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.542 [2024-12-17 00:36:52.382383] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.542 [2024-12-17 00:36:52.382432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.542 [2024-12-17 00:36:52.382445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.542 [2024-12-17 00:36:52.397438] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.542 [2024-12-17 00:36:52.397487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.542 [2024-12-17 00:36:52.397515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.542 [2024-12-17 00:36:52.412349] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.542 [2024-12-17 00:36:52.412405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.542 [2024-12-17 00:36:52.412419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.542 [2024-12-17 00:36:52.427479] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.542 [2024-12-17 00:36:52.427530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.542 [2024-12-17 00:36:52.427543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.542 [2024-12-17 00:36:52.442592] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.542 [2024-12-17 00:36:52.442629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:7767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.542 [2024-12-17 00:36:52.442642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.542 [2024-12-17 00:36:52.457669] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.542 [2024-12-17 00:36:52.457733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.542 [2024-12-17 00:36:52.457762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.542 [2024-12-17 00:36:52.472866] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.542 [2024-12-17 00:36:52.472931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.542 [2024-12-17 00:36:52.472973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.542 [2024-12-17 00:36:52.487621] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.542 [2024-12-17 00:36:52.487669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.542 [2024-12-17 00:36:52.487697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.542 [2024-12-17 00:36:52.501879] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.542 [2024-12-17 00:36:52.501927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.542 [2024-12-17 00:36:52.501954] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.542 [2024-12-17 00:36:52.516413] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.542 [2024-12-17 00:36:52.516461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.542 [2024-12-17 00:36:52.516504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.542 [2024-12-17 00:36:52.530697] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.542 [2024-12-17 00:36:52.530761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.542 [2024-12-17 00:36:52.530789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.801 [2024-12-17 00:36:52.552021] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.801 [2024-12-17 00:36:52.552074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.801 [2024-12-17 00:36:52.552102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.801 [2024-12-17 00:36:52.566168] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.801 [2024-12-17 00:36:52.566219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.801 [2024-12-17 00:36:52.566247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.801 [2024-12-17 00:36:52.580362] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.801 [2024-12-17 00:36:52.580412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.801 [2024-12-17 00:36:52.580440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.801 [2024-12-17 00:36:52.594475] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.801 [2024-12-17 00:36:52.594527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.801 [2024-12-17 00:36:52.594540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.801 17332.00 IOPS, 67.70 MiB/s [2024-12-17T00:36:52.804Z] [2024-12-17 00:36:52.610006] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.801 [2024-12-17 00:36:52.610058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 
lba:10435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.802 [2024-12-17 00:36:52.610086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.802 [2024-12-17 00:36:52.624056] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.802 [2024-12-17 00:36:52.624105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:17936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.802 [2024-12-17 00:36:52.624134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.802 [2024-12-17 00:36:52.638288] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.802 [2024-12-17 00:36:52.638362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.802 [2024-12-17 00:36:52.638375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.802 [2024-12-17 00:36:52.652382] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.802 [2024-12-17 00:36:52.652430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.802 [2024-12-17 00:36:52.652458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.802 [2024-12-17 00:36:52.666470] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.802 [2024-12-17 00:36:52.666521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.802 [2024-12-17 00:36:52.666533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.802 [2024-12-17 00:36:52.680485] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.802 [2024-12-17 00:36:52.680533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.802 [2024-12-17 00:36:52.680584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.802 [2024-12-17 00:36:52.694673] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.802 [2024-12-17 00:36:52.694737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.802 [2024-12-17 00:36:52.694765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.802 [2024-12-17 00:36:52.708842] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.802 [2024-12-17 00:36:52.708907] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.802 [2024-12-17 00:36:52.708936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.802 [2024-12-17 00:36:52.723069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.802 [2024-12-17 00:36:52.723117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.802 [2024-12-17 00:36:52.723145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.802 [2024-12-17 00:36:52.739474] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.802 [2024-12-17 00:36:52.739523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.802 [2024-12-17 00:36:52.739538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.802 [2024-12-17 00:36:52.756214] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.802 [2024-12-17 00:36:52.756263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.802 [2024-12-17 00:36:52.756290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.802 [2024-12-17 00:36:52.771165] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.802 [2024-12-17 00:36:52.771214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.802 [2024-12-17 00:36:52.771242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.802 [2024-12-17 00:36:52.785419] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.802 [2024-12-17 00:36:52.785482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.802 [2024-12-17 00:36:52.785495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.802 [2024-12-17 00:36:52.799451] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:06.802 [2024-12-17 00:36:52.799498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.802 [2024-12-17 00:36:52.799527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.062 [2024-12-17 00:36:52.814886] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 
00:21:07.062 [2024-12-17 00:36:52.814939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.062 [2024-12-17 00:36:52.814968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.062 [2024-12-17 00:36:52.829049] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.062 [2024-12-17 00:36:52.829100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:7509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.062 [2024-12-17 00:36:52.829128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.062 [2024-12-17 00:36:52.843497] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.062 [2024-12-17 00:36:52.843549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.062 [2024-12-17 00:36:52.843562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.062 [2024-12-17 00:36:52.857555] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.062 [2024-12-17 00:36:52.857605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.062 [2024-12-17 00:36:52.857633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.062 [2024-12-17 00:36:52.873052] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.062 [2024-12-17 00:36:52.873102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.062 [2024-12-17 00:36:52.873131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.062 [2024-12-17 00:36:52.887848] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.062 [2024-12-17 00:36:52.887899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:25019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.062 [2024-12-17 00:36:52.887929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.062 [2024-12-17 00:36:52.902478] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.062 [2024-12-17 00:36:52.902526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.062 [2024-12-17 00:36:52.902554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.062 [2024-12-17 00:36:52.916637] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x6b2510) 00:21:07.062 [2024-12-17 00:36:52.916689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.062 [2024-12-17 00:36:52.916702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.062 [2024-12-17 00:36:52.930626] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.062 [2024-12-17 00:36:52.930676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.062 [2024-12-17 00:36:52.930703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.062 [2024-12-17 00:36:52.944511] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.062 [2024-12-17 00:36:52.944584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.062 [2024-12-17 00:36:52.944598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.062 [2024-12-17 00:36:52.958525] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.062 [2024-12-17 00:36:52.958574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.062 [2024-12-17 00:36:52.958601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.062 [2024-12-17 00:36:52.972605] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.062 [2024-12-17 00:36:52.972658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.062 [2024-12-17 00:36:52.972671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.062 [2024-12-17 00:36:52.986644] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.062 [2024-12-17 00:36:52.986693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:18055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.062 [2024-12-17 00:36:52.986721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.062 [2024-12-17 00:36:53.000729] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.062 [2024-12-17 00:36:53.000780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.062 [2024-12-17 00:36:53.000793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.062 [2024-12-17 00:36:53.014700] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.062 [2024-12-17 00:36:53.014748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.062 [2024-12-17 00:36:53.014775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.062 [2024-12-17 00:36:53.028669] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.062 [2024-12-17 00:36:53.028721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.062 [2024-12-17 00:36:53.028734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.062 [2024-12-17 00:36:53.042641] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.062 [2024-12-17 00:36:53.042690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.062 [2024-12-17 00:36:53.042718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.062 [2024-12-17 00:36:53.056751] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.062 [2024-12-17 00:36:53.056803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.062 [2024-12-17 00:36:53.056815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.321 [2024-12-17 00:36:53.072196] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.321 [2024-12-17 00:36:53.072250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.321 [2024-12-17 00:36:53.072279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.321 [2024-12-17 00:36:53.086333] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.321 [2024-12-17 00:36:53.086392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.321 [2024-12-17 00:36:53.086421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.321 [2024-12-17 00:36:53.100237] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.321 [2024-12-17 00:36:53.100288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.321 [2024-12-17 00:36:53.100317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:21:07.321 [2024-12-17 00:36:53.114268] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.321 [2024-12-17 00:36:53.114341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.321 [2024-12-17 00:36:53.114354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.321 [2024-12-17 00:36:53.128254] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.321 [2024-12-17 00:36:53.128304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.321 [2024-12-17 00:36:53.128341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.321 [2024-12-17 00:36:53.142176] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.321 [2024-12-17 00:36:53.142224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.321 [2024-12-17 00:36:53.142252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.321 [2024-12-17 00:36:53.156117] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.321 [2024-12-17 00:36:53.156166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.321 [2024-12-17 00:36:53.156193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.321 [2024-12-17 00:36:53.170092] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.321 [2024-12-17 00:36:53.170141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.321 [2024-12-17 00:36:53.170170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.321 [2024-12-17 00:36:53.184075] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.321 [2024-12-17 00:36:53.184124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.321 [2024-12-17 00:36:53.184151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.321 [2024-12-17 00:36:53.198088] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.321 [2024-12-17 00:36:53.198136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:11032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.321 [2024-12-17 00:36:53.198165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.321 [2024-12-17 00:36:53.212054] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.321 [2024-12-17 00:36:53.212102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.321 [2024-12-17 00:36:53.212129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.321 [2024-12-17 00:36:53.226161] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.321 [2024-12-17 00:36:53.226210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.321 [2024-12-17 00:36:53.226238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.321 [2024-12-17 00:36:53.240055] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.322 [2024-12-17 00:36:53.240103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.322 [2024-12-17 00:36:53.240131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.322 [2024-12-17 00:36:53.254087] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.322 [2024-12-17 00:36:53.254135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.322 [2024-12-17 00:36:53.254163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.322 [2024-12-17 00:36:53.268041] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.322 [2024-12-17 00:36:53.268090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.322 [2024-12-17 00:36:53.268118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.322 [2024-12-17 00:36:53.282189] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.322 [2024-12-17 00:36:53.282240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.322 [2024-12-17 00:36:53.282267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.322 [2024-12-17 00:36:53.296185] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.322 [2024-12-17 00:36:53.296235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.322 [2024-12-17 00:36:53.296263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.322 [2024-12-17 00:36:53.310331] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.322 [2024-12-17 00:36:53.310379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.322 [2024-12-17 00:36:53.310407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.322 [2024-12-17 00:36:53.324901] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.322 [2024-12-17 00:36:53.324969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.322 [2024-12-17 00:36:53.324998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.581 [2024-12-17 00:36:53.339464] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.581 [2024-12-17 00:36:53.339516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.581 [2024-12-17 00:36:53.339544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.581 [2024-12-17 00:36:53.353637] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.581 [2024-12-17 00:36:53.353673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.581 [2024-12-17 00:36:53.353686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.581 [2024-12-17 00:36:53.367684] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.581 [2024-12-17 00:36:53.367733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.581 [2024-12-17 00:36:53.367762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.581 [2024-12-17 00:36:53.381893] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.581 [2024-12-17 00:36:53.381942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.581 [2024-12-17 00:36:53.381970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.581 [2024-12-17 00:36:53.397717] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.581 [2024-12-17 00:36:53.397767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.581 [2024-12-17 00:36:53.397796] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.581 [2024-12-17 00:36:53.411910] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.581 [2024-12-17 00:36:53.411958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.581 [2024-12-17 00:36:53.411986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.581 [2024-12-17 00:36:53.426132] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.581 [2024-12-17 00:36:53.426180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.581 [2024-12-17 00:36:53.426208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.581 [2024-12-17 00:36:53.440102] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.581 [2024-12-17 00:36:53.440149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.581 [2024-12-17 00:36:53.440177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.581 [2024-12-17 00:36:53.454179] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.581 [2024-12-17 00:36:53.454226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.581 [2024-12-17 00:36:53.454254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.581 [2024-12-17 00:36:53.474703] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.581 [2024-12-17 00:36:53.474752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.581 [2024-12-17 00:36:53.474781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.581 [2024-12-17 00:36:53.490986] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.581 [2024-12-17 00:36:53.491037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.581 [2024-12-17 00:36:53.491067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.581 [2024-12-17 00:36:53.507526] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.582 [2024-12-17 00:36:53.507577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.582 [2024-12-17 
00:36:53.507606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.582 [2024-12-17 00:36:53.523228] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.582 [2024-12-17 00:36:53.523278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.582 [2024-12-17 00:36:53.523306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.582 [2024-12-17 00:36:53.538541] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.582 [2024-12-17 00:36:53.538594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.582 [2024-12-17 00:36:53.538607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.582 [2024-12-17 00:36:53.553609] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.582 [2024-12-17 00:36:53.553659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.582 [2024-12-17 00:36:53.553687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.582 [2024-12-17 00:36:53.568800] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.582 [2024-12-17 00:36:53.568854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.582 [2024-12-17 00:36:53.568867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.582 [2024-12-17 00:36:53.584422] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.582 [2024-12-17 00:36:53.584507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.582 [2024-12-17 00:36:53.584522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.841 [2024-12-17 00:36:53.600267] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b2510) 00:21:07.841 [2024-12-17 00:36:53.600341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.841 [2024-12-17 00:36:53.600357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.841 17394.50 IOPS, 67.95 MiB/s 00:21:07.841 Latency(us) 00:21:07.841 [2024-12-17T00:36:53.844Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.841 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:07.841 nvme0n1 : 2.01 17407.84 68.00 0.00 0.00 
7347.72 6672.76 28240.06 00:21:07.841 [2024-12-17T00:36:53.844Z] =================================================================================================================== 00:21:07.841 [2024-12-17T00:36:53.844Z] Total : 17407.84 68.00 0.00 0.00 7347.72 6672.76 28240.06 00:21:07.841 { 00:21:07.841 "results": [ 00:21:07.841 { 00:21:07.841 "job": "nvme0n1", 00:21:07.841 "core_mask": "0x2", 00:21:07.841 "workload": "randread", 00:21:07.841 "status": "finished", 00:21:07.841 "queue_depth": 128, 00:21:07.841 "io_size": 4096, 00:21:07.841 "runtime": 2.00582, 00:21:07.841 "iops": 17407.8431763568, 00:21:07.841 "mibps": 67.99938740764375, 00:21:07.841 "io_failed": 0, 00:21:07.842 "io_timeout": 0, 00:21:07.842 "avg_latency_us": 7347.717256142489, 00:21:07.842 "min_latency_us": 6672.756363636364, 00:21:07.842 "max_latency_us": 28240.05818181818 00:21:07.842 } 00:21:07.842 ], 00:21:07.842 "core_count": 1 00:21:07.842 } 00:21:07.842 00:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:07.842 00:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:07.842 00:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:07.842 00:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:07.842 | .driver_specific 00:21:07.842 | .nvme_error 00:21:07.842 | .status_code 00:21:07.842 | .command_transient_transport_error' 00:21:08.101 00:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 136 > 0 )) 00:21:08.101 00:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94515 00:21:08.101 00:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 94515 ']' 00:21:08.101 00:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 94515 00:21:08.101 00:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:21:08.101 00:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:08.101 00:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94515 00:21:08.101 00:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:08.101 00:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:08.101 killing process with pid 94515 00:21:08.101 00:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94515' 00:21:08.101 Received shutdown signal, test time was about 2.000000 seconds 00:21:08.101 00:21:08.101 Latency(us) 00:21:08.101 [2024-12-17T00:36:54.104Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:08.101 [2024-12-17T00:36:54.104Z] =================================================================================================================== 00:21:08.101 [2024-12-17T00:36:54.104Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:08.101 00:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 94515 00:21:08.101 
00:36:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 94515 00:21:08.101 00:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:21:08.102 00:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:08.102 00:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:21:08.102 00:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:21:08.102 00:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:21:08.102 00:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94571 00:21:08.102 00:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94571 /var/tmp/bperf.sock 00:21:08.102 00:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:21:08.102 00:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 94571 ']' 00:21:08.102 00:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:08.102 00:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:08.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:08.102 00:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:08.102 00:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:08.102 00:36:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:08.361 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:08.361 Zero copy mechanism will not be used. 00:21:08.361 [2024-12-17 00:36:54.131051] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:21:08.361 [2024-12-17 00:36:54.131170] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94571 ] 00:21:08.361 [2024-12-17 00:36:54.267236] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.361 [2024-12-17 00:36:54.301280] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.361 [2024-12-17 00:36:54.329178] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:09.313 00:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:09.313 00:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:21:09.313 00:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:09.313 00:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:09.581 00:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:09.581 00:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.581 00:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:09.581 00:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.581 00:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:09.581 00:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:09.840 nvme0n1 00:21:09.840 00:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:21:09.840 00:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.840 00:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:09.840 00:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.840 00:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:09.841 00:36:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:09.841 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:09.841 Zero copy mechanism will not be used. 00:21:09.841 Running I/O for 2 seconds... 
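
The xtrace above captures the whole digest-error flow for this iteration: per-error-code NVMe statistics are enabled, crc32c error injection is cleared while the controller is attached with data digest enabled (--ddgst), injection is then switched to corrupt mode, a timed bdevperf workload is run, and the transient transport error counter is read back and required to be non-zero. A condensed sketch of that sequence, reusing the same rpc.py/bdevperf.py calls and the /var/tmp/bperf.sock RPC socket that appear in this log (the repo-local paths and -i 32 interval are simply the values this job uses, not a general recommendation):

  # enable per-status-code NVMe error counters and unlimited bdev retries
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # make sure injection is off while the controller attaches with data digest enabled
  scripts/rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t disable
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # corrupt crc32c results in the accel layer (interval argument as used by this job)
  scripts/rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 32

  # run the timed workload; corrupted digests surface as COMMAND TRANSIENT TRANSPORT ERROR completions
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

  # read back the transient transport error count and require at least one
  errs=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errs > 0 ))

The per-IO *ERROR*/*NOTICE* pairs that follow are the expected signature of this setup: each corrupted digest is logged by nvme_tcp.c and completed to the bdev layer as a transient transport error, which is what the counter check at the end of the run verifies.
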
00:21:09.841 [2024-12-17 00:36:55.748821] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:09.841 [2024-12-17 00:36:55.748900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.841 [2024-12-17 00:36:55.748945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:09.841 [2024-12-17 00:36:55.752811] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:09.841 [2024-12-17 00:36:55.752880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.841 [2024-12-17 00:36:55.752908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:09.841 [2024-12-17 00:36:55.757039] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:09.841 [2024-12-17 00:36:55.757091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.841 [2024-12-17 00:36:55.757118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:09.841 [2024-12-17 00:36:55.761617] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:09.841 [2024-12-17 00:36:55.761667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.841 [2024-12-17 00:36:55.761694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:09.841 [2024-12-17 00:36:55.766077] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:09.841 [2024-12-17 00:36:55.766127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.841 [2024-12-17 00:36:55.766155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:09.841 [2024-12-17 00:36:55.770060] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:09.841 [2024-12-17 00:36:55.770112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.841 [2024-12-17 00:36:55.770140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:09.841 [2024-12-17 00:36:55.774095] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:09.841 [2024-12-17 00:36:55.774147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.841 [2024-12-17 00:36:55.774190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:09.841 [2024-12-17 00:36:55.778087] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:09.841 [2024-12-17 00:36:55.778138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.841 [2024-12-17 00:36:55.778166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:09.841 [2024-12-17 00:36:55.782014] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:09.841 [2024-12-17 00:36:55.782065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.841 [2024-12-17 00:36:55.782093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:09.841 [2024-12-17 00:36:55.785982] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:09.841 [2024-12-17 00:36:55.786032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.841 [2024-12-17 00:36:55.786059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:09.841 [2024-12-17 00:36:55.789894] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:09.841 [2024-12-17 00:36:55.789945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.841 [2024-12-17 00:36:55.789973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:09.841 [2024-12-17 00:36:55.794228] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:09.841 [2024-12-17 00:36:55.794280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.841 [2024-12-17 00:36:55.794308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:09.841 [2024-12-17 00:36:55.798435] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:09.841 [2024-12-17 00:36:55.798486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.841 [2024-12-17 00:36:55.798514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:09.841 [2024-12-17 00:36:55.803119] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:09.841 [2024-12-17 00:36:55.803153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.841 [2024-12-17 00:36:55.803181] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:09.841 [2024-12-17 00:36:55.807941] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:09.841 [2024-12-17 00:36:55.807985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.841 [2024-12-17 00:36:55.808014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:09.841 [2024-12-17 00:36:55.812878] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:09.841 [2024-12-17 00:36:55.812958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.841 [2024-12-17 00:36:55.812986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:09.841 [2024-12-17 00:36:55.817403] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:09.841 [2024-12-17 00:36:55.817468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.841 [2024-12-17 00:36:55.817481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:09.841 [2024-12-17 00:36:55.821841] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:09.841 [2024-12-17 00:36:55.821890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.841 [2024-12-17 00:36:55.821919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:09.841 [2024-12-17 00:36:55.826194] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:09.841 [2024-12-17 00:36:55.826244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.841 [2024-12-17 00:36:55.826272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:09.841 [2024-12-17 00:36:55.830580] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:09.841 [2024-12-17 00:36:55.830634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.841 [2024-12-17 00:36:55.830678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:09.841 [2024-12-17 00:36:55.834956] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:09.841 [2024-12-17 00:36:55.835006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:09.841 [2024-12-17 00:36:55.835034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:09.841 [2024-12-17 00:36:55.839335] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:09.841 [2024-12-17 00:36:55.839415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.841 [2024-12-17 00:36:55.839445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:09.841 [2024-12-17 00:36:55.844117] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:09.841 [2024-12-17 00:36:55.844159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.841 [2024-12-17 00:36:55.844204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.103 [2024-12-17 00:36:55.849054] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.103 [2024-12-17 00:36:55.849111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.103 [2024-12-17 00:36:55.849140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.103 [2024-12-17 00:36:55.853410] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.103 [2024-12-17 00:36:55.853474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.103 [2024-12-17 00:36:55.853503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.103 [2024-12-17 00:36:55.857718] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.103 [2024-12-17 00:36:55.857769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.103 [2024-12-17 00:36:55.857798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.103 [2024-12-17 00:36:55.862320] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.103 [2024-12-17 00:36:55.862390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.103 [2024-12-17 00:36:55.862403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.103 [2024-12-17 00:36:55.866749] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.103 [2024-12-17 00:36:55.866800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.103 [2024-12-17 00:36:55.866830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.103 [2024-12-17 00:36:55.871227] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.103 [2024-12-17 00:36:55.871279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.103 [2024-12-17 00:36:55.871308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.103 [2024-12-17 00:36:55.875811] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.103 [2024-12-17 00:36:55.875864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.103 [2024-12-17 00:36:55.875892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.103 [2024-12-17 00:36:55.880167] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.103 [2024-12-17 00:36:55.880219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.103 [2024-12-17 00:36:55.880248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.103 [2024-12-17 00:36:55.884483] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.103 [2024-12-17 00:36:55.884534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.103 [2024-12-17 00:36:55.884587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.103 [2024-12-17 00:36:55.888878] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.103 [2024-12-17 00:36:55.888944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.103 [2024-12-17 00:36:55.888958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.103 [2024-12-17 00:36:55.893233] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.103 [2024-12-17 00:36:55.893284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.103 [2024-12-17 00:36:55.893312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.103 [2024-12-17 00:36:55.897394] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.103 [2024-12-17 00:36:55.897445] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.103 [2024-12-17 00:36:55.897473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.103 [2024-12-17 00:36:55.901479] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.103 [2024-12-17 00:36:55.901529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.103 [2024-12-17 00:36:55.901556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.103 [2024-12-17 00:36:55.905602] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.103 [2024-12-17 00:36:55.905653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.103 [2024-12-17 00:36:55.905680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.103 [2024-12-17 00:36:55.909930] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.103 [2024-12-17 00:36:55.909981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.103 [2024-12-17 00:36:55.910009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.103 [2024-12-17 00:36:55.914054] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.103 [2024-12-17 00:36:55.914105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.103 [2024-12-17 00:36:55.914133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.103 [2024-12-17 00:36:55.918149] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.103 [2024-12-17 00:36:55.918200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.103 [2024-12-17 00:36:55.918229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.103 [2024-12-17 00:36:55.922282] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.103 [2024-12-17 00:36:55.922345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.103 [2024-12-17 00:36:55.922374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.103 [2024-12-17 00:36:55.926401] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 
00:21:10.103 [2024-12-17 00:36:55.926452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.103 [2024-12-17 00:36:55.926481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.103 [2024-12-17 00:36:55.930653] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.103 [2024-12-17 00:36:55.930718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.103 [2024-12-17 00:36:55.930745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.104 [2024-12-17 00:36:55.934803] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.104 [2024-12-17 00:36:55.934854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.104 [2024-12-17 00:36:55.934882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.104 [2024-12-17 00:36:55.938845] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.104 [2024-12-17 00:36:55.938896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.104 [2024-12-17 00:36:55.938925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.104 [2024-12-17 00:36:55.942970] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.104 [2024-12-17 00:36:55.943021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.104 [2024-12-17 00:36:55.943049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.104 [2024-12-17 00:36:55.947303] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.104 [2024-12-17 00:36:55.947364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.104 [2024-12-17 00:36:55.947392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.104 [2024-12-17 00:36:55.951404] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.104 [2024-12-17 00:36:55.951453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.104 [2024-12-17 00:36:55.951481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.104 [2024-12-17 00:36:55.955452] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.104 [2024-12-17 00:36:55.955503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.104 [2024-12-17 00:36:55.955531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.104 [2024-12-17 00:36:55.959485] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.104 [2024-12-17 00:36:55.959534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.104 [2024-12-17 00:36:55.959562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.104 [2024-12-17 00:36:55.963594] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.104 [2024-12-17 00:36:55.963678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.104 [2024-12-17 00:36:55.963691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.104 [2024-12-17 00:36:55.967863] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.104 [2024-12-17 00:36:55.967915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.104 [2024-12-17 00:36:55.967943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.104 [2024-12-17 00:36:55.971945] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.104 [2024-12-17 00:36:55.971997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.104 [2024-12-17 00:36:55.972025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.104 [2024-12-17 00:36:55.976082] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.104 [2024-12-17 00:36:55.976134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.104 [2024-12-17 00:36:55.976162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.104 [2024-12-17 00:36:55.980152] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.104 [2024-12-17 00:36:55.980203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.104 [2024-12-17 00:36:55.980231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.104 [2024-12-17 00:36:55.984446] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.104 [2024-12-17 00:36:55.984483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.104 [2024-12-17 00:36:55.984512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.104 [2024-12-17 00:36:55.988491] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.104 [2024-12-17 00:36:55.988542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.104 [2024-12-17 00:36:55.988611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.104 [2024-12-17 00:36:55.992608] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.104 [2024-12-17 00:36:55.992646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.104 [2024-12-17 00:36:55.992659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.104 [2024-12-17 00:36:55.996624] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.104 [2024-12-17 00:36:55.996677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.104 [2024-12-17 00:36:55.996690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.104 [2024-12-17 00:36:56.000959] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.104 [2024-12-17 00:36:56.001011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.104 [2024-12-17 00:36:56.001039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.104 [2024-12-17 00:36:56.005064] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.104 [2024-12-17 00:36:56.005115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.104 [2024-12-17 00:36:56.005142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.104 [2024-12-17 00:36:56.009183] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.104 [2024-12-17 00:36:56.009235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.104 [2024-12-17 00:36:56.009264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:21:10.104 [2024-12-17 00:36:56.013506] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.104 [2024-12-17 00:36:56.013557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.104 [2024-12-17 00:36:56.013586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.104 [2024-12-17 00:36:56.017554] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.104 [2024-12-17 00:36:56.017605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.104 [2024-12-17 00:36:56.017633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.104 [2024-12-17 00:36:56.021598] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.104 [2024-12-17 00:36:56.021649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.104 [2024-12-17 00:36:56.021677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.104 [2024-12-17 00:36:56.025731] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.104 [2024-12-17 00:36:56.025781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.104 [2024-12-17 00:36:56.025808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.104 [2024-12-17 00:36:56.029967] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.104 [2024-12-17 00:36:56.030018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.104 [2024-12-17 00:36:56.030046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.104 [2024-12-17 00:36:56.034521] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.104 [2024-12-17 00:36:56.034574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.104 [2024-12-17 00:36:56.034602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.105 [2024-12-17 00:36:56.038632] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.105 [2024-12-17 00:36:56.038683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.105 [2024-12-17 00:36:56.038711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.105 [2024-12-17 00:36:56.043038] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.105 [2024-12-17 00:36:56.043089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.105 [2024-12-17 00:36:56.043117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.105 [2024-12-17 00:36:56.047248] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.105 [2024-12-17 00:36:56.047297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.105 [2024-12-17 00:36:56.047352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.105 [2024-12-17 00:36:56.051283] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.105 [2024-12-17 00:36:56.051361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.105 [2024-12-17 00:36:56.051390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.105 [2024-12-17 00:36:56.055289] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.105 [2024-12-17 00:36:56.055363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.105 [2024-12-17 00:36:56.055375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.105 [2024-12-17 00:36:56.059258] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.105 [2024-12-17 00:36:56.059334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.105 [2024-12-17 00:36:56.059348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.105 [2024-12-17 00:36:56.063191] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.105 [2024-12-17 00:36:56.063241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.105 [2024-12-17 00:36:56.063270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.105 [2024-12-17 00:36:56.067184] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.105 [2024-12-17 00:36:56.067234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.105 [2024-12-17 00:36:56.067262] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.105 [2024-12-17 00:36:56.071156] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.105 [2024-12-17 00:36:56.071207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.105 [2024-12-17 00:36:56.071235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.105 [2024-12-17 00:36:56.075106] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.105 [2024-12-17 00:36:56.075156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.105 [2024-12-17 00:36:56.075183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.105 [2024-12-17 00:36:56.079089] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.105 [2024-12-17 00:36:56.079140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.105 [2024-12-17 00:36:56.079168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.105 [2024-12-17 00:36:56.083192] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.105 [2024-12-17 00:36:56.083241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.105 [2024-12-17 00:36:56.083269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.105 [2024-12-17 00:36:56.087182] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.105 [2024-12-17 00:36:56.087234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.105 [2024-12-17 00:36:56.087261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.105 [2024-12-17 00:36:56.091177] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.105 [2024-12-17 00:36:56.091228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.105 [2024-12-17 00:36:56.091256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.105 [2024-12-17 00:36:56.095136] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.105 [2024-12-17 00:36:56.095186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:10.105 [2024-12-17 00:36:56.095214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.105 [2024-12-17 00:36:56.099139] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.105 [2024-12-17 00:36:56.099190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.105 [2024-12-17 00:36:56.099218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.105 [2024-12-17 00:36:56.103413] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.105 [2024-12-17 00:36:56.103478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.105 [2024-12-17 00:36:56.103499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.367 [2024-12-17 00:36:56.107804] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.367 [2024-12-17 00:36:56.107858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.367 [2024-12-17 00:36:56.107886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.367 [2024-12-17 00:36:56.111863] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.367 [2024-12-17 00:36:56.111903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.367 [2024-12-17 00:36:56.111945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.367 [2024-12-17 00:36:56.116006] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.367 [2024-12-17 00:36:56.116059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.367 [2024-12-17 00:36:56.116087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.367 [2024-12-17 00:36:56.119967] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.367 [2024-12-17 00:36:56.120018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.367 [2024-12-17 00:36:56.120047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.367 [2024-12-17 00:36:56.123856] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.367 [2024-12-17 00:36:56.123906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.367 [2024-12-17 00:36:56.123934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.367 [2024-12-17 00:36:56.127882] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.367 [2024-12-17 00:36:56.127934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.367 [2024-12-17 00:36:56.127962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.367 [2024-12-17 00:36:56.131797] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.367 [2024-12-17 00:36:56.131847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.367 [2024-12-17 00:36:56.131875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.367 [2024-12-17 00:36:56.135691] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.367 [2024-12-17 00:36:56.135740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.367 [2024-12-17 00:36:56.135768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.367 [2024-12-17 00:36:56.139634] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.367 [2024-12-17 00:36:56.139683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.367 [2024-12-17 00:36:56.139712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.367 [2024-12-17 00:36:56.143572] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.367 [2024-12-17 00:36:56.143622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.367 [2024-12-17 00:36:56.143650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.367 [2024-12-17 00:36:56.147535] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.367 [2024-12-17 00:36:56.147584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.367 [2024-12-17 00:36:56.147612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.367 [2024-12-17 00:36:56.151498] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.367 [2024-12-17 00:36:56.151548] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.367 [2024-12-17 00:36:56.151576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.367 [2024-12-17 00:36:56.155401] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.367 [2024-12-17 00:36:56.155450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.367 [2024-12-17 00:36:56.155477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.367 [2024-12-17 00:36:56.159327] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.367 [2024-12-17 00:36:56.159389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.367 [2024-12-17 00:36:56.159418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.367 [2024-12-17 00:36:56.163240] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.367 [2024-12-17 00:36:56.163290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.368 [2024-12-17 00:36:56.163318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.368 [2024-12-17 00:36:56.167081] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.368 [2024-12-17 00:36:56.167132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.368 [2024-12-17 00:36:56.167160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.368 [2024-12-17 00:36:56.171017] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.368 [2024-12-17 00:36:56.171067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.368 [2024-12-17 00:36:56.171094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.368 [2024-12-17 00:36:56.174983] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.368 [2024-12-17 00:36:56.175033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.368 [2024-12-17 00:36:56.175063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.368 [2024-12-17 00:36:56.178919] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 
00:21:10.368 [2024-12-17 00:36:56.178968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.368 [2024-12-17 00:36:56.178995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.368 [2024-12-17 00:36:56.182858] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.368 [2024-12-17 00:36:56.182908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.368 [2024-12-17 00:36:56.182936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.368 [2024-12-17 00:36:56.186782] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.368 [2024-12-17 00:36:56.186832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.368 [2024-12-17 00:36:56.186859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.368 [2024-12-17 00:36:56.190790] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.368 [2024-12-17 00:36:56.190850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.368 [2024-12-17 00:36:56.190878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.368 [2024-12-17 00:36:56.194739] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.368 [2024-12-17 00:36:56.194789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.368 [2024-12-17 00:36:56.194818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.368 [2024-12-17 00:36:56.198642] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.368 [2024-12-17 00:36:56.198694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.368 [2024-12-17 00:36:56.198721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.368 [2024-12-17 00:36:56.202571] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.368 [2024-12-17 00:36:56.202623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.368 [2024-12-17 00:36:56.202636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.368 [2024-12-17 00:36:56.206470] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.368 [2024-12-17 00:36:56.206521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.368 [2024-12-17 00:36:56.206533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.368 [2024-12-17 00:36:56.210331] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.368 [2024-12-17 00:36:56.210379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.368 [2024-12-17 00:36:56.210391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.368 [2024-12-17 00:36:56.214302] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.368 [2024-12-17 00:36:56.214376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.368 [2024-12-17 00:36:56.214388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.368 [2024-12-17 00:36:56.218217] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.368 [2024-12-17 00:36:56.218267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.368 [2024-12-17 00:36:56.218295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.368 [2024-12-17 00:36:56.222102] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.368 [2024-12-17 00:36:56.222153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.368 [2024-12-17 00:36:56.222181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.368 [2024-12-17 00:36:56.226076] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.368 [2024-12-17 00:36:56.226126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.368 [2024-12-17 00:36:56.226153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.368 [2024-12-17 00:36:56.229976] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.368 [2024-12-17 00:36:56.230026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.368 [2024-12-17 00:36:56.230054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.368 [2024-12-17 00:36:56.233882] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.368 [2024-12-17 00:36:56.233932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.368 [2024-12-17 00:36:56.233959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.368 [2024-12-17 00:36:56.237849] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.368 [2024-12-17 00:36:56.237901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.368 [2024-12-17 00:36:56.237928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.368 [2024-12-17 00:36:56.241842] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.368 [2024-12-17 00:36:56.241893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.368 [2024-12-17 00:36:56.241921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.368 [2024-12-17 00:36:56.245854] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.368 [2024-12-17 00:36:56.245904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.368 [2024-12-17 00:36:56.245931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.368 [2024-12-17 00:36:56.249783] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.368 [2024-12-17 00:36:56.249834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.368 [2024-12-17 00:36:56.249862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.368 [2024-12-17 00:36:56.253730] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.368 [2024-12-17 00:36:56.253779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.368 [2024-12-17 00:36:56.253806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.368 [2024-12-17 00:36:56.257604] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.368 [2024-12-17 00:36:56.257654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.368 [2024-12-17 00:36:56.257681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:21:10.368 [2024-12-17 00:36:56.261656] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.369 [2024-12-17 00:36:56.261705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.369 [2024-12-17 00:36:56.261733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.369 [2024-12-17 00:36:56.265543] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.369 [2024-12-17 00:36:56.265592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.369 [2024-12-17 00:36:56.265620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.369 [2024-12-17 00:36:56.269562] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.369 [2024-12-17 00:36:56.269610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.369 [2024-12-17 00:36:56.269638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.369 [2024-12-17 00:36:56.273487] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.369 [2024-12-17 00:36:56.273536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.369 [2024-12-17 00:36:56.273564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.369 [2024-12-17 00:36:56.277518] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.369 [2024-12-17 00:36:56.277567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.369 [2024-12-17 00:36:56.277595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.369 [2024-12-17 00:36:56.281498] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.369 [2024-12-17 00:36:56.281548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.369 [2024-12-17 00:36:56.281575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.369 [2024-12-17 00:36:56.285457] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.369 [2024-12-17 00:36:56.285506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.369 [2024-12-17 00:36:56.285533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.369 [2024-12-17 00:36:56.290001] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.369 [2024-12-17 00:36:56.290050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.369 [2024-12-17 00:36:56.290077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.369 [2024-12-17 00:36:56.294597] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.369 [2024-12-17 00:36:56.294658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.369 [2024-12-17 00:36:56.294671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.369 [2024-12-17 00:36:56.298793] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.369 [2024-12-17 00:36:56.298843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.369 [2024-12-17 00:36:56.298871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.369 [2024-12-17 00:36:56.302887] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.369 [2024-12-17 00:36:56.302937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.369 [2024-12-17 00:36:56.302964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.369 [2024-12-17 00:36:56.307001] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.369 [2024-12-17 00:36:56.307050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.369 [2024-12-17 00:36:56.307078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.369 [2024-12-17 00:36:56.311017] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.369 [2024-12-17 00:36:56.311067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.369 [2024-12-17 00:36:56.311111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.369 [2024-12-17 00:36:56.315008] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.369 [2024-12-17 00:36:56.315058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.369 [2024-12-17 00:36:56.315086] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.369 [2024-12-17 00:36:56.319026] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.369 [2024-12-17 00:36:56.319077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.369 [2024-12-17 00:36:56.319106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.369 [2024-12-17 00:36:56.322981] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.369 [2024-12-17 00:36:56.323031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.369 [2024-12-17 00:36:56.323058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.369 [2024-12-17 00:36:56.326919] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.369 [2024-12-17 00:36:56.326969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.369 [2024-12-17 00:36:56.326996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.369 [2024-12-17 00:36:56.330895] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.369 [2024-12-17 00:36:56.330945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.369 [2024-12-17 00:36:56.330972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.369 [2024-12-17 00:36:56.334877] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.369 [2024-12-17 00:36:56.334927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.369 [2024-12-17 00:36:56.334954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.369 [2024-12-17 00:36:56.338767] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.369 [2024-12-17 00:36:56.338817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.369 [2024-12-17 00:36:56.338844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.369 [2024-12-17 00:36:56.342718] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.369 [2024-12-17 00:36:56.342768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:10.369 [2024-12-17 00:36:56.342796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.369 [2024-12-17 00:36:56.346681] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.369 [2024-12-17 00:36:56.346730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.369 [2024-12-17 00:36:56.346757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.369 [2024-12-17 00:36:56.350671] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.369 [2024-12-17 00:36:56.350720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.369 [2024-12-17 00:36:56.350747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.369 [2024-12-17 00:36:56.354604] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.369 [2024-12-17 00:36:56.354653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.369 [2024-12-17 00:36:56.354682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.369 [2024-12-17 00:36:56.358578] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.369 [2024-12-17 00:36:56.358626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.369 [2024-12-17 00:36:56.358654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.369 [2024-12-17 00:36:56.362539] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.369 [2024-12-17 00:36:56.362588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.370 [2024-12-17 00:36:56.362617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.370 [2024-12-17 00:36:56.366738] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.370 [2024-12-17 00:36:56.366791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.370 [2024-12-17 00:36:56.366819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.631 [2024-12-17 00:36:56.371256] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.631 [2024-12-17 00:36:56.371336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.631 [2024-12-17 00:36:56.371365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.631 [2024-12-17 00:36:56.375218] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.631 [2024-12-17 00:36:56.375270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.631 [2024-12-17 00:36:56.375297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.631 [2024-12-17 00:36:56.379529] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.631 [2024-12-17 00:36:56.379581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.631 [2024-12-17 00:36:56.379610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.631 [2024-12-17 00:36:56.383617] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.631 [2024-12-17 00:36:56.383667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.631 [2024-12-17 00:36:56.383695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.631 [2024-12-17 00:36:56.387591] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.631 [2024-12-17 00:36:56.387625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.631 [2024-12-17 00:36:56.387653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.631 [2024-12-17 00:36:56.391580] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.631 [2024-12-17 00:36:56.391630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.631 [2024-12-17 00:36:56.391657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.631 [2024-12-17 00:36:56.395477] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.631 [2024-12-17 00:36:56.395526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.631 [2024-12-17 00:36:56.395553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.631 [2024-12-17 00:36:56.399372] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.631 [2024-12-17 00:36:56.399422] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.631 [2024-12-17 00:36:56.399450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.631 [2024-12-17 00:36:56.403233] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.631 [2024-12-17 00:36:56.403284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.631 [2024-12-17 00:36:56.403312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.631 [2024-12-17 00:36:56.407149] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.632 [2024-12-17 00:36:56.407199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.632 [2024-12-17 00:36:56.407227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.632 [2024-12-17 00:36:56.411095] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.632 [2024-12-17 00:36:56.411145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.632 [2024-12-17 00:36:56.411173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.632 [2024-12-17 00:36:56.415054] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.632 [2024-12-17 00:36:56.415105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.632 [2024-12-17 00:36:56.415132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.632 [2024-12-17 00:36:56.419026] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.632 [2024-12-17 00:36:56.419078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.632 [2024-12-17 00:36:56.419106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.632 [2024-12-17 00:36:56.422994] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.632 [2024-12-17 00:36:56.423044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.632 [2024-12-17 00:36:56.423072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.632 [2024-12-17 00:36:56.426945] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 
00:21:10.632 [2024-12-17 00:36:56.426994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.632 [2024-12-17 00:36:56.427022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.632 [2024-12-17 00:36:56.430901] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.632 [2024-12-17 00:36:56.430952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.632 [2024-12-17 00:36:56.430980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.632 [2024-12-17 00:36:56.434813] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.632 [2024-12-17 00:36:56.434862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.632 [2024-12-17 00:36:56.434890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.632 [2024-12-17 00:36:56.438759] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.632 [2024-12-17 00:36:56.438810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.632 [2024-12-17 00:36:56.438837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.632 [2024-12-17 00:36:56.442765] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.632 [2024-12-17 00:36:56.442815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.632 [2024-12-17 00:36:56.442843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.632 [2024-12-17 00:36:56.446672] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.632 [2024-12-17 00:36:56.446737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.632 [2024-12-17 00:36:56.446764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.632 [2024-12-17 00:36:56.450621] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.632 [2024-12-17 00:36:56.450673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.632 [2024-12-17 00:36:56.450685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.632 [2024-12-17 00:36:56.454495] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.632 [2024-12-17 00:36:56.454547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.632 [2024-12-17 00:36:56.454559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.632 [2024-12-17 00:36:56.458364] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.632 [2024-12-17 00:36:56.458411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.632 [2024-12-17 00:36:56.458423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.632 [2024-12-17 00:36:56.462234] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.632 [2024-12-17 00:36:56.462283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.632 [2024-12-17 00:36:56.462311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.632 [2024-12-17 00:36:56.466148] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.632 [2024-12-17 00:36:56.466199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.632 [2024-12-17 00:36:56.466227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.632 [2024-12-17 00:36:56.470070] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.632 [2024-12-17 00:36:56.470120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.632 [2024-12-17 00:36:56.470147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.632 [2024-12-17 00:36:56.473983] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.632 [2024-12-17 00:36:56.474033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.632 [2024-12-17 00:36:56.474061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.632 [2024-12-17 00:36:56.478109] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.632 [2024-12-17 00:36:56.478159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.632 [2024-12-17 00:36:56.478187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.632 [2024-12-17 00:36:56.482112] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.632 [2024-12-17 00:36:56.482162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.632 [2024-12-17 00:36:56.482190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.632 [2024-12-17 00:36:56.486088] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.632 [2024-12-17 00:36:56.486139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.632 [2024-12-17 00:36:56.486167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.632 [2024-12-17 00:36:56.490083] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.632 [2024-12-17 00:36:56.490133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.632 [2024-12-17 00:36:56.490160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.632 [2024-12-17 00:36:56.494019] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.632 [2024-12-17 00:36:56.494069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.632 [2024-12-17 00:36:56.494097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.632 [2024-12-17 00:36:56.498015] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.633 [2024-12-17 00:36:56.498065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.633 [2024-12-17 00:36:56.498092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.633 [2024-12-17 00:36:56.501992] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.633 [2024-12-17 00:36:56.502042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.633 [2024-12-17 00:36:56.502070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.633 [2024-12-17 00:36:56.506053] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.633 [2024-12-17 00:36:56.506104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.633 [2024-12-17 00:36:56.506133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:21:10.633 [2024-12-17 00:36:56.510077] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.633 [2024-12-17 00:36:56.510128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.633 [2024-12-17 00:36:56.510155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.633 [2024-12-17 00:36:56.514095] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.633 [2024-12-17 00:36:56.514145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.633 [2024-12-17 00:36:56.514172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.633 [2024-12-17 00:36:56.518068] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.633 [2024-12-17 00:36:56.518117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.633 [2024-12-17 00:36:56.518144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.633 [2024-12-17 00:36:56.522038] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.633 [2024-12-17 00:36:56.522087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.633 [2024-12-17 00:36:56.522115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.633 [2024-12-17 00:36:56.526060] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.633 [2024-12-17 00:36:56.526109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.633 [2024-12-17 00:36:56.526136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.633 [2024-12-17 00:36:56.530046] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.633 [2024-12-17 00:36:56.530096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.633 [2024-12-17 00:36:56.530124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.633 [2024-12-17 00:36:56.534067] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.633 [2024-12-17 00:36:56.534116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.633 [2024-12-17 00:36:56.534144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.633 [2024-12-17 00:36:56.538048] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.633 [2024-12-17 00:36:56.538098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.633 [2024-12-17 00:36:56.538125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.633 [2024-12-17 00:36:56.542024] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.633 [2024-12-17 00:36:56.542073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.633 [2024-12-17 00:36:56.542101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.633 [2024-12-17 00:36:56.545960] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.633 [2024-12-17 00:36:56.546010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.633 [2024-12-17 00:36:56.546038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.633 [2024-12-17 00:36:56.549905] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.633 [2024-12-17 00:36:56.549955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.633 [2024-12-17 00:36:56.549983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.633 [2024-12-17 00:36:56.553833] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.633 [2024-12-17 00:36:56.553883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.633 [2024-12-17 00:36:56.553910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.633 [2024-12-17 00:36:56.557754] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.633 [2024-12-17 00:36:56.557804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.633 [2024-12-17 00:36:56.557832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.633 [2024-12-17 00:36:56.561735] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.633 [2024-12-17 00:36:56.561785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.633 [2024-12-17 00:36:56.561812] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.633 [2024-12-17 00:36:56.565690] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.633 [2024-12-17 00:36:56.565740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.633 [2024-12-17 00:36:56.565768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.633 [2024-12-17 00:36:56.569600] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.633 [2024-12-17 00:36:56.569649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.633 [2024-12-17 00:36:56.569676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.633 [2024-12-17 00:36:56.573594] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.633 [2024-12-17 00:36:56.573644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.633 [2024-12-17 00:36:56.573672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.633 [2024-12-17 00:36:56.577626] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.633 [2024-12-17 00:36:56.577692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.633 [2024-12-17 00:36:56.577720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.633 [2024-12-17 00:36:56.581502] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.633 [2024-12-17 00:36:56.581551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.633 [2024-12-17 00:36:56.581579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.633 [2024-12-17 00:36:56.585470] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.633 [2024-12-17 00:36:56.585519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.633 [2024-12-17 00:36:56.585547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.633 [2024-12-17 00:36:56.589328] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.633 [2024-12-17 00:36:56.589388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:10.634 [2024-12-17 00:36:56.589432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.634 [2024-12-17 00:36:56.593190] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.634 [2024-12-17 00:36:56.593239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.634 [2024-12-17 00:36:56.593267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.634 [2024-12-17 00:36:56.597126] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.634 [2024-12-17 00:36:56.597177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.634 [2024-12-17 00:36:56.597205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.634 [2024-12-17 00:36:56.601110] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.634 [2024-12-17 00:36:56.601160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.634 [2024-12-17 00:36:56.601187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.634 [2024-12-17 00:36:56.605088] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.634 [2024-12-17 00:36:56.605139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.634 [2024-12-17 00:36:56.605166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.634 [2024-12-17 00:36:56.609075] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.634 [2024-12-17 00:36:56.609124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.634 [2024-12-17 00:36:56.609151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.634 [2024-12-17 00:36:56.613066] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.634 [2024-12-17 00:36:56.613116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.634 [2024-12-17 00:36:56.613144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.634 [2024-12-17 00:36:56.616993] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.634 [2024-12-17 00:36:56.617043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.634 [2024-12-17 00:36:56.617070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.634 [2024-12-17 00:36:56.620977] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.634 [2024-12-17 00:36:56.621026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.634 [2024-12-17 00:36:56.621054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.634 [2024-12-17 00:36:56.624875] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.634 [2024-12-17 00:36:56.624925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.634 [2024-12-17 00:36:56.624967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.634 [2024-12-17 00:36:56.628811] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.634 [2024-12-17 00:36:56.628862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.634 [2024-12-17 00:36:56.628890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.634 [2024-12-17 00:36:56.633247] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.634 [2024-12-17 00:36:56.633300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.634 [2024-12-17 00:36:56.633341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.896 [2024-12-17 00:36:56.637448] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.896 [2024-12-17 00:36:56.637500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.896 [2024-12-17 00:36:56.637528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.896 [2024-12-17 00:36:56.641661] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.896 [2024-12-17 00:36:56.641740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.896 [2024-12-17 00:36:56.641768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.896 [2024-12-17 00:36:56.645676] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.896 [2024-12-17 00:36:56.645728] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.896 [2024-12-17 00:36:56.645755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.896 [2024-12-17 00:36:56.649562] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.896 [2024-12-17 00:36:56.649612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.896 [2024-12-17 00:36:56.649639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.896 [2024-12-17 00:36:56.653568] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.896 [2024-12-17 00:36:56.653618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.896 [2024-12-17 00:36:56.653646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.896 [2024-12-17 00:36:56.657527] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.896 [2024-12-17 00:36:56.657576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.896 [2024-12-17 00:36:56.657604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.896 [2024-12-17 00:36:56.661542] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.896 [2024-12-17 00:36:56.661592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.896 [2024-12-17 00:36:56.661619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.896 [2024-12-17 00:36:56.665502] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.896 [2024-12-17 00:36:56.665551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.896 [2024-12-17 00:36:56.665578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.896 [2024-12-17 00:36:56.669423] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.896 [2024-12-17 00:36:56.669473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.896 [2024-12-17 00:36:56.669500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.896 [2024-12-17 00:36:56.673275] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.896 [2024-12-17 00:36:56.673351] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.896 [2024-12-17 00:36:56.673364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.896 [2024-12-17 00:36:56.677142] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.896 [2024-12-17 00:36:56.677192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.896 [2024-12-17 00:36:56.677220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.896 [2024-12-17 00:36:56.681184] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.896 [2024-12-17 00:36:56.681234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.896 [2024-12-17 00:36:56.681262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.896 [2024-12-17 00:36:56.685063] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.896 [2024-12-17 00:36:56.685113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.896 [2024-12-17 00:36:56.685140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.896 [2024-12-17 00:36:56.689111] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.896 [2024-12-17 00:36:56.689161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.896 [2024-12-17 00:36:56.689189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.896 [2024-12-17 00:36:56.693091] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.896 [2024-12-17 00:36:56.693140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.896 [2024-12-17 00:36:56.693168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.896 [2024-12-17 00:36:56.696999] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.897 [2024-12-17 00:36:56.697049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.897 [2024-12-17 00:36:56.697076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.897 [2024-12-17 00:36:56.700956] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xdb5f50) 00:21:10.897 [2024-12-17 00:36:56.701006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.897 [2024-12-17 00:36:56.701034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.897 [2024-12-17 00:36:56.704811] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.897 [2024-12-17 00:36:56.704861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.897 [2024-12-17 00:36:56.704889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.897 [2024-12-17 00:36:56.708685] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.897 [2024-12-17 00:36:56.708737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.897 [2024-12-17 00:36:56.708749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.897 [2024-12-17 00:36:56.712740] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.897 [2024-12-17 00:36:56.712794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.897 [2024-12-17 00:36:56.712807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.897 [2024-12-17 00:36:56.716668] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.897 [2024-12-17 00:36:56.716721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.897 [2024-12-17 00:36:56.716733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.897 [2024-12-17 00:36:56.720520] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.897 [2024-12-17 00:36:56.720592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.897 [2024-12-17 00:36:56.720606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.897 [2024-12-17 00:36:56.724288] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.897 [2024-12-17 00:36:56.724349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.897 [2024-12-17 00:36:56.724362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.897 [2024-12-17 00:36:56.728164] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.897 [2024-12-17 00:36:56.728215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.897 [2024-12-17 00:36:56.728242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.897 [2024-12-17 00:36:56.732147] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.897 [2024-12-17 00:36:56.732197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.897 [2024-12-17 00:36:56.732225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.897 [2024-12-17 00:36:56.736060] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.897 [2024-12-17 00:36:56.736111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.897 [2024-12-17 00:36:56.736139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.897 [2024-12-17 00:36:56.739984] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.897 [2024-12-17 00:36:56.740034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.897 [2024-12-17 00:36:56.740062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.897 [2024-12-17 00:36:56.743870] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.897 [2024-12-17 00:36:56.743920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.897 [2024-12-17 00:36:56.743947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.897 7626.00 IOPS, 953.25 MiB/s [2024-12-17T00:36:56.900Z] [2024-12-17 00:36:56.748974] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.897 [2024-12-17 00:36:56.749024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.897 [2024-12-17 00:36:56.749053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.897 [2024-12-17 00:36:56.751791] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.897 [2024-12-17 00:36:56.751840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.897 [2024-12-17 00:36:56.751868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.897 [2024-12-17 00:36:56.754671] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.897 [2024-12-17 00:36:56.754738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.897 [2024-12-17 00:36:56.754765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.897 [2024-12-17 00:36:56.757562] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.897 [2024-12-17 00:36:56.757612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.897 [2024-12-17 00:36:56.757639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.897 [2024-12-17 00:36:56.760199] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.897 [2024-12-17 00:36:56.760248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.897 [2024-12-17 00:36:56.760276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.897 [2024-12-17 00:36:56.763499] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.897 [2024-12-17 00:36:56.763550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.897 [2024-12-17 00:36:56.763561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.897 [2024-12-17 00:36:56.766188] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.897 [2024-12-17 00:36:56.766239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.897 [2024-12-17 00:36:56.766266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.897 [2024-12-17 00:36:56.769120] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.897 [2024-12-17 00:36:56.769170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.897 [2024-12-17 00:36:56.769198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.897 [2024-12-17 00:36:56.771750] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.897 [2024-12-17 00:36:56.771800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.897 [2024-12-17 00:36:56.771827] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.897 [2024-12-17 00:36:56.774967] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.897 [2024-12-17 00:36:56.775016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.897 [2024-12-17 00:36:56.775044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.897 [2024-12-17 00:36:56.777912] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.897 [2024-12-17 00:36:56.777961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.897 [2024-12-17 00:36:56.777989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.897 [2024-12-17 00:36:56.780971] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.897 [2024-12-17 00:36:56.781021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.897 [2024-12-17 00:36:56.781048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.897 [2024-12-17 00:36:56.783958] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.898 [2024-12-17 00:36:56.784007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.898 [2024-12-17 00:36:56.784034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.898 [2024-12-17 00:36:56.786919] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.898 [2024-12-17 00:36:56.786969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.898 [2024-12-17 00:36:56.786997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.898 [2024-12-17 00:36:56.790007] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.898 [2024-12-17 00:36:56.790057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.898 [2024-12-17 00:36:56.790086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.898 [2024-12-17 00:36:56.793114] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.898 [2024-12-17 00:36:56.793164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.898 
[2024-12-17 00:36:56.793192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.898 [2024-12-17 00:36:56.795767] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.898 [2024-12-17 00:36:56.795816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.898 [2024-12-17 00:36:56.795844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.898 [2024-12-17 00:36:56.798778] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.898 [2024-12-17 00:36:56.798827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.898 [2024-12-17 00:36:56.798854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.898 [2024-12-17 00:36:56.801390] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.898 [2024-12-17 00:36:56.801419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.898 [2024-12-17 00:36:56.801446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.898 [2024-12-17 00:36:56.804403] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.898 [2024-12-17 00:36:56.804453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.898 [2024-12-17 00:36:56.804465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.898 [2024-12-17 00:36:56.807072] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.898 [2024-12-17 00:36:56.807122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.898 [2024-12-17 00:36:56.807150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.898 [2024-12-17 00:36:56.810423] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.898 [2024-12-17 00:36:56.810458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.898 [2024-12-17 00:36:56.810485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.898 [2024-12-17 00:36:56.813436] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.898 [2024-12-17 00:36:56.813486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:10.898 [2024-12-17 00:36:56.813514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.898 [2024-12-17 00:36:56.817176] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.898 [2024-12-17 00:36:56.817227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.898 [2024-12-17 00:36:56.817255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.898 [2024-12-17 00:36:56.821269] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.898 [2024-12-17 00:36:56.821344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.898 [2024-12-17 00:36:56.821360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.898 [2024-12-17 00:36:56.826217] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.898 [2024-12-17 00:36:56.826268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.898 [2024-12-17 00:36:56.826296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.898 [2024-12-17 00:36:56.829498] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.898 [2024-12-17 00:36:56.829551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.898 [2024-12-17 00:36:56.829580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.898 [2024-12-17 00:36:56.833128] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.898 [2024-12-17 00:36:56.833180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.898 [2024-12-17 00:36:56.833209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.898 [2024-12-17 00:36:56.837369] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.898 [2024-12-17 00:36:56.837435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.898 [2024-12-17 00:36:56.837450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.898 [2024-12-17 00:36:56.840112] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.898 [2024-12-17 00:36:56.840161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.898 [2024-12-17 00:36:56.840189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.898 [2024-12-17 00:36:56.844221] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.898 [2024-12-17 00:36:56.844270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.898 [2024-12-17 00:36:56.844298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.898 [2024-12-17 00:36:56.846968] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.898 [2024-12-17 00:36:56.847017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.898 [2024-12-17 00:36:56.847045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.898 [2024-12-17 00:36:56.850737] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.898 [2024-12-17 00:36:56.850787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.898 [2024-12-17 00:36:56.850815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.898 [2024-12-17 00:36:56.853578] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.898 [2024-12-17 00:36:56.853615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.898 [2024-12-17 00:36:56.853646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.898 [2024-12-17 00:36:56.857372] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.898 [2024-12-17 00:36:56.857437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.898 [2024-12-17 00:36:56.857467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.898 [2024-12-17 00:36:56.860331] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.898 [2024-12-17 00:36:56.860409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.898 [2024-12-17 00:36:56.860423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.898 [2024-12-17 00:36:56.864103] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.898 [2024-12-17 00:36:56.864152] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.898 [2024-12-17 00:36:56.864180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.898 [2024-12-17 00:36:56.867187] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.899 [2024-12-17 00:36:56.867238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.899 [2024-12-17 00:36:56.867266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.899 [2024-12-17 00:36:56.870097] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.899 [2024-12-17 00:36:56.870145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.899 [2024-12-17 00:36:56.870173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.899 [2024-12-17 00:36:56.874075] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.899 [2024-12-17 00:36:56.874126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.899 [2024-12-17 00:36:56.874154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.899 [2024-12-17 00:36:56.876759] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.899 [2024-12-17 00:36:56.876800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.899 [2024-12-17 00:36:56.876813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.899 [2024-12-17 00:36:56.880520] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.899 [2024-12-17 00:36:56.880596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.899 [2024-12-17 00:36:56.880610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.899 [2024-12-17 00:36:56.883042] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.899 [2024-12-17 00:36:56.883092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.899 [2024-12-17 00:36:56.883119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.899 [2024-12-17 00:36:56.886722] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.899 
[2024-12-17 00:36:56.886772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.899 [2024-12-17 00:36:56.886800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.899 [2024-12-17 00:36:56.889195] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.899 [2024-12-17 00:36:56.889244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.899 [2024-12-17 00:36:56.889272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.899 [2024-12-17 00:36:56.893093] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.899 [2024-12-17 00:36:56.893142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.899 [2024-12-17 00:36:56.893170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.899 [2024-12-17 00:36:56.897465] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:10.899 [2024-12-17 00:36:56.897518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.899 [2024-12-17 00:36:56.897546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.160 [2024-12-17 00:36:56.901753] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.160 [2024-12-17 00:36:56.901806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.160 [2024-12-17 00:36:56.901834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.160 [2024-12-17 00:36:56.905771] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.160 [2024-12-17 00:36:56.905823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.160 [2024-12-17 00:36:56.905851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.160 [2024-12-17 00:36:56.909885] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.160 [2024-12-17 00:36:56.909938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.160 [2024-12-17 00:36:56.909966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.160 [2024-12-17 00:36:56.913893] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0xdb5f50) 00:21:11.161 [2024-12-17 00:36:56.913944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.161 [2024-12-17 00:36:56.913972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.161 [2024-12-17 00:36:56.917907] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.161 [2024-12-17 00:36:56.917957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.161 [2024-12-17 00:36:56.917985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.161 [2024-12-17 00:36:56.921931] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.161 [2024-12-17 00:36:56.921982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.161 [2024-12-17 00:36:56.922009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.161 [2024-12-17 00:36:56.925916] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.161 [2024-12-17 00:36:56.925966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.161 [2024-12-17 00:36:56.925994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.161 [2024-12-17 00:36:56.929964] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.161 [2024-12-17 00:36:56.930016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.161 [2024-12-17 00:36:56.930044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.161 [2024-12-17 00:36:56.933975] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.161 [2024-12-17 00:36:56.934026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.161 [2024-12-17 00:36:56.934054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.161 [2024-12-17 00:36:56.937946] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.161 [2024-12-17 00:36:56.937996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.161 [2024-12-17 00:36:56.938024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.161 [2024-12-17 00:36:56.941871] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.161 [2024-12-17 00:36:56.941921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.161 [2024-12-17 00:36:56.941949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.161 [2024-12-17 00:36:56.945768] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.161 [2024-12-17 00:36:56.945818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.161 [2024-12-17 00:36:56.945846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.161 [2024-12-17 00:36:56.949692] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.161 [2024-12-17 00:36:56.949742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.161 [2024-12-17 00:36:56.949769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.161 [2024-12-17 00:36:56.953580] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.161 [2024-12-17 00:36:56.953630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.161 [2024-12-17 00:36:56.953658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.161 [2024-12-17 00:36:56.957544] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.161 [2024-12-17 00:36:56.957593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.161 [2024-12-17 00:36:56.957620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.161 [2024-12-17 00:36:56.961551] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.161 [2024-12-17 00:36:56.961601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.161 [2024-12-17 00:36:56.961628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.161 [2024-12-17 00:36:56.965467] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.161 [2024-12-17 00:36:56.965515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.161 [2024-12-17 00:36:56.965542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:21:11.161 [2024-12-17 00:36:56.969356] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.161 [2024-12-17 00:36:56.969418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.161 [2024-12-17 00:36:56.969445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.161 [2024-12-17 00:36:56.973319] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.161 [2024-12-17 00:36:56.973379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.161 [2024-12-17 00:36:56.973409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.161 [2024-12-17 00:36:56.977538] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.161 [2024-12-17 00:36:56.977587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.161 [2024-12-17 00:36:56.977615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.161 [2024-12-17 00:36:56.981516] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.161 [2024-12-17 00:36:56.981565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.161 [2024-12-17 00:36:56.981592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.161 [2024-12-17 00:36:56.985413] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.161 [2024-12-17 00:36:56.985461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.161 [2024-12-17 00:36:56.985489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.161 [2024-12-17 00:36:56.989335] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.161 [2024-12-17 00:36:56.989397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.161 [2024-12-17 00:36:56.989425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.161 [2024-12-17 00:36:56.993268] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.161 [2024-12-17 00:36:56.993344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.161 [2024-12-17 00:36:56.993358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.161 [2024-12-17 00:36:56.997233] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.161 [2024-12-17 00:36:56.997297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.161 [2024-12-17 00:36:56.997324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.161 [2024-12-17 00:36:57.001320] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.161 [2024-12-17 00:36:57.001381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.161 [2024-12-17 00:36:57.001409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.161 [2024-12-17 00:36:57.005262] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.161 [2024-12-17 00:36:57.005337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.161 [2024-12-17 00:36:57.005351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.161 [2024-12-17 00:36:57.009236] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.161 [2024-12-17 00:36:57.009286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.161 [2024-12-17 00:36:57.009313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.161 [2024-12-17 00:36:57.013221] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.162 [2024-12-17 00:36:57.013271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.162 [2024-12-17 00:36:57.013299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.162 [2024-12-17 00:36:57.017169] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.162 [2024-12-17 00:36:57.017219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.162 [2024-12-17 00:36:57.017247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.162 [2024-12-17 00:36:57.021163] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.162 [2024-12-17 00:36:57.021213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.162 [2024-12-17 00:36:57.021241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.162 [2024-12-17 00:36:57.025218] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.162 [2024-12-17 00:36:57.025269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.162 [2024-12-17 00:36:57.025296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.162 [2024-12-17 00:36:57.029288] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.162 [2024-12-17 00:36:57.029363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.162 [2024-12-17 00:36:57.029376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.162 [2024-12-17 00:36:57.033213] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.162 [2024-12-17 00:36:57.033263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.162 [2024-12-17 00:36:57.033291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.162 [2024-12-17 00:36:57.037181] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.162 [2024-12-17 00:36:57.037230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.162 [2024-12-17 00:36:57.037258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.162 [2024-12-17 00:36:57.041105] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.162 [2024-12-17 00:36:57.041155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.162 [2024-12-17 00:36:57.041182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.162 [2024-12-17 00:36:57.045433] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.162 [2024-12-17 00:36:57.045497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.162 [2024-12-17 00:36:57.045510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.162 [2024-12-17 00:36:57.049888] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.162 [2024-12-17 00:36:57.049939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.162 [2024-12-17 00:36:57.049967] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.162 [2024-12-17 00:36:57.054191] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.162 [2024-12-17 00:36:57.054242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.162 [2024-12-17 00:36:57.054269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.162 [2024-12-17 00:36:57.058539] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.162 [2024-12-17 00:36:57.058592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.162 [2024-12-17 00:36:57.058620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.162 [2024-12-17 00:36:57.063250] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.162 [2024-12-17 00:36:57.063302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.162 [2024-12-17 00:36:57.063359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.162 [2024-12-17 00:36:57.067898] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.162 [2024-12-17 00:36:57.067961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.162 [2024-12-17 00:36:57.067990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.162 [2024-12-17 00:36:57.072372] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.162 [2024-12-17 00:36:57.072436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.162 [2024-12-17 00:36:57.072466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.162 [2024-12-17 00:36:57.076934] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.162 [2024-12-17 00:36:57.076999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.162 [2024-12-17 00:36:57.077028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.162 [2024-12-17 00:36:57.081175] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.162 [2024-12-17 00:36:57.081227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.162 [2024-12-17 
00:36:57.081269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.162 [2024-12-17 00:36:57.085456] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.162 [2024-12-17 00:36:57.085507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.162 [2024-12-17 00:36:57.085535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.162 [2024-12-17 00:36:57.089938] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.162 [2024-12-17 00:36:57.090005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.162 [2024-12-17 00:36:57.090033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.162 [2024-12-17 00:36:57.094174] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.162 [2024-12-17 00:36:57.094225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.162 [2024-12-17 00:36:57.094253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.162 [2024-12-17 00:36:57.098245] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.162 [2024-12-17 00:36:57.098296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.162 [2024-12-17 00:36:57.098335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.162 [2024-12-17 00:36:57.102583] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.162 [2024-12-17 00:36:57.102619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.162 [2024-12-17 00:36:57.102647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.162 [2024-12-17 00:36:57.106579] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.162 [2024-12-17 00:36:57.106630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.162 [2024-12-17 00:36:57.106658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.162 [2024-12-17 00:36:57.110670] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.162 [2024-12-17 00:36:57.110720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:11.162 [2024-12-17 00:36:57.110749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.162 [2024-12-17 00:36:57.114662] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.162 [2024-12-17 00:36:57.114713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.162 [2024-12-17 00:36:57.114741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.162 [2024-12-17 00:36:57.118949] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.162 [2024-12-17 00:36:57.119000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.163 [2024-12-17 00:36:57.119028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.163 [2024-12-17 00:36:57.123091] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.163 [2024-12-17 00:36:57.123142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.163 [2024-12-17 00:36:57.123170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.163 [2024-12-17 00:36:57.127206] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.163 [2024-12-17 00:36:57.127258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.163 [2024-12-17 00:36:57.127286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.163 [2024-12-17 00:36:57.131497] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.163 [2024-12-17 00:36:57.131542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.163 [2024-12-17 00:36:57.131555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.163 [2024-12-17 00:36:57.135518] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.163 [2024-12-17 00:36:57.135569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.163 [2024-12-17 00:36:57.135597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.163 [2024-12-17 00:36:57.139549] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.163 [2024-12-17 00:36:57.139599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5856 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.163 [2024-12-17 00:36:57.139627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.163 [2024-12-17 00:36:57.143588] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.163 [2024-12-17 00:36:57.143622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.163 [2024-12-17 00:36:57.143650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.163 [2024-12-17 00:36:57.147917] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.163 [2024-12-17 00:36:57.147969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.163 [2024-12-17 00:36:57.147996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.163 [2024-12-17 00:36:57.151985] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.163 [2024-12-17 00:36:57.152035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.163 [2024-12-17 00:36:57.152064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.163 [2024-12-17 00:36:57.156076] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.163 [2024-12-17 00:36:57.156126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.163 [2024-12-17 00:36:57.156154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.163 [2024-12-17 00:36:57.160610] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.163 [2024-12-17 00:36:57.160652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.163 [2024-12-17 00:36:57.160666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.425 [2024-12-17 00:36:57.165085] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.425 [2024-12-17 00:36:57.165139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.425 [2024-12-17 00:36:57.165168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.425 [2024-12-17 00:36:57.169161] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.425 [2024-12-17 00:36:57.169212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:3 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.425 [2024-12-17 00:36:57.169241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.425 [2024-12-17 00:36:57.173566] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.425 [2024-12-17 00:36:57.173619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.425 [2024-12-17 00:36:57.173647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.425 [2024-12-17 00:36:57.177914] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.425 [2024-12-17 00:36:57.177967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.425 [2024-12-17 00:36:57.177995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.425 [2024-12-17 00:36:57.182081] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.425 [2024-12-17 00:36:57.182132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.425 [2024-12-17 00:36:57.182161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.425 [2024-12-17 00:36:57.186113] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.425 [2024-12-17 00:36:57.186164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.425 [2024-12-17 00:36:57.186192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.425 [2024-12-17 00:36:57.190375] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.425 [2024-12-17 00:36:57.190460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.425 [2024-12-17 00:36:57.190474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.425 [2024-12-17 00:36:57.194481] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.425 [2024-12-17 00:36:57.194533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.425 [2024-12-17 00:36:57.194561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.425 [2024-12-17 00:36:57.198464] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.425 [2024-12-17 00:36:57.198515] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.425 [2024-12-17 00:36:57.198543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.425 [2024-12-17 00:36:57.202567] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.425 [2024-12-17 00:36:57.202619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.425 [2024-12-17 00:36:57.202646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.425 [2024-12-17 00:36:57.206745] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.425 [2024-12-17 00:36:57.206796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.425 [2024-12-17 00:36:57.206825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.425 [2024-12-17 00:36:57.210875] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.425 [2024-12-17 00:36:57.210926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.425 [2024-12-17 00:36:57.210954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.425 [2024-12-17 00:36:57.214995] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.425 [2024-12-17 00:36:57.215046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.425 [2024-12-17 00:36:57.215075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.425 [2024-12-17 00:36:57.219298] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.425 [2024-12-17 00:36:57.219393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.425 [2024-12-17 00:36:57.219422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.425 [2024-12-17 00:36:57.223481] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.425 [2024-12-17 00:36:57.223533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.425 [2024-12-17 00:36:57.223561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.425 [2024-12-17 00:36:57.227630] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.425 
[2024-12-17 00:36:57.227696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.425 [2024-12-17 00:36:57.227723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.425 [2024-12-17 00:36:57.231717] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.425 [2024-12-17 00:36:57.231767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.425 [2024-12-17 00:36:57.231795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.425 [2024-12-17 00:36:57.235857] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.425 [2024-12-17 00:36:57.235907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.425 [2024-12-17 00:36:57.235934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.425 [2024-12-17 00:36:57.239888] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.425 [2024-12-17 00:36:57.239939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.425 [2024-12-17 00:36:57.239967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.425 [2024-12-17 00:36:57.243892] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.425 [2024-12-17 00:36:57.243942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.425 [2024-12-17 00:36:57.243970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.425 [2024-12-17 00:36:57.247893] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.425 [2024-12-17 00:36:57.247942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.425 [2024-12-17 00:36:57.247970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.425 [2024-12-17 00:36:57.251772] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.425 [2024-12-17 00:36:57.251821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.425 [2024-12-17 00:36:57.251849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.425 [2024-12-17 00:36:57.255682] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0xdb5f50) 00:21:11.425 [2024-12-17 00:36:57.255733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.425 [2024-12-17 00:36:57.255760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.425 [2024-12-17 00:36:57.259698] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.425 [2024-12-17 00:36:57.259751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.426 [2024-12-17 00:36:57.259764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.426 [2024-12-17 00:36:57.263686] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.426 [2024-12-17 00:36:57.263736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.426 [2024-12-17 00:36:57.263764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.426 [2024-12-17 00:36:57.267629] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.426 [2024-12-17 00:36:57.267679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.426 [2024-12-17 00:36:57.267706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.426 [2024-12-17 00:36:57.271511] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.426 [2024-12-17 00:36:57.271560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.426 [2024-12-17 00:36:57.271588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.426 [2024-12-17 00:36:57.275440] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.426 [2024-12-17 00:36:57.275489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.426 [2024-12-17 00:36:57.275517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.426 [2024-12-17 00:36:57.279385] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.426 [2024-12-17 00:36:57.279434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.426 [2024-12-17 00:36:57.279462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.426 [2024-12-17 00:36:57.283333] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.426 [2024-12-17 00:36:57.283381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.426 [2024-12-17 00:36:57.283408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.426 [2024-12-17 00:36:57.287225] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.426 [2024-12-17 00:36:57.287275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.426 [2024-12-17 00:36:57.287304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.426 [2024-12-17 00:36:57.291128] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.426 [2024-12-17 00:36:57.291178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.426 [2024-12-17 00:36:57.291205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.426 [2024-12-17 00:36:57.295130] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.426 [2024-12-17 00:36:57.295179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.426 [2024-12-17 00:36:57.295207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.426 [2024-12-17 00:36:57.299078] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.426 [2024-12-17 00:36:57.299127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.426 [2024-12-17 00:36:57.299155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.426 [2024-12-17 00:36:57.302976] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.426 [2024-12-17 00:36:57.303026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.426 [2024-12-17 00:36:57.303054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.426 [2024-12-17 00:36:57.306996] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.426 [2024-12-17 00:36:57.307046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.426 [2024-12-17 00:36:57.307074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:21:11.426 [2024-12-17 00:36:57.311015] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.426 [2024-12-17 00:36:57.311065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.426 [2024-12-17 00:36:57.311093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.426 [2024-12-17 00:36:57.315068] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.426 [2024-12-17 00:36:57.315119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.426 [2024-12-17 00:36:57.315147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.426 [2024-12-17 00:36:57.319006] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.426 [2024-12-17 00:36:57.319055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.426 [2024-12-17 00:36:57.319083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.426 [2024-12-17 00:36:57.323011] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.426 [2024-12-17 00:36:57.323062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.426 [2024-12-17 00:36:57.323090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.426 [2024-12-17 00:36:57.326947] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.426 [2024-12-17 00:36:57.326998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.426 [2024-12-17 00:36:57.327025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.426 [2024-12-17 00:36:57.330915] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.426 [2024-12-17 00:36:57.330965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.426 [2024-12-17 00:36:57.330993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.426 [2024-12-17 00:36:57.334900] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.426 [2024-12-17 00:36:57.334951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.426 [2024-12-17 00:36:57.334979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.426 [2024-12-17 00:36:57.338892] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.426 [2024-12-17 00:36:57.338943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.426 [2024-12-17 00:36:57.338970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.426 [2024-12-17 00:36:57.342811] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.426 [2024-12-17 00:36:57.342861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.426 [2024-12-17 00:36:57.342890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.426 [2024-12-17 00:36:57.346871] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.426 [2024-12-17 00:36:57.346920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.426 [2024-12-17 00:36:57.346948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.426 [2024-12-17 00:36:57.351358] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.426 [2024-12-17 00:36:57.351406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.426 [2024-12-17 00:36:57.351434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.426 [2024-12-17 00:36:57.355847] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.426 [2024-12-17 00:36:57.355897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.426 [2024-12-17 00:36:57.355924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.426 [2024-12-17 00:36:57.359749] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.426 [2024-12-17 00:36:57.359799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.426 [2024-12-17 00:36:57.359826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.427 [2024-12-17 00:36:57.363791] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.427 [2024-12-17 00:36:57.363840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.427 [2024-12-17 00:36:57.363868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.427 [2024-12-17 00:36:57.367763] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.427 [2024-12-17 00:36:57.367813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.427 [2024-12-17 00:36:57.367841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.427 [2024-12-17 00:36:57.371704] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.427 [2024-12-17 00:36:57.371754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.427 [2024-12-17 00:36:57.371781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.427 [2024-12-17 00:36:57.375630] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.427 [2024-12-17 00:36:57.375678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.427 [2024-12-17 00:36:57.375706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.427 [2024-12-17 00:36:57.379609] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.427 [2024-12-17 00:36:57.379658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.427 [2024-12-17 00:36:57.379686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.427 [2024-12-17 00:36:57.383556] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.427 [2024-12-17 00:36:57.383605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.427 [2024-12-17 00:36:57.383633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.427 [2024-12-17 00:36:57.387503] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.427 [2024-12-17 00:36:57.387552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.427 [2024-12-17 00:36:57.387580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.427 [2024-12-17 00:36:57.391387] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.427 [2024-12-17 00:36:57.391436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.427 [2024-12-17 00:36:57.391464] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.427 [2024-12-17 00:36:57.395260] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.427 [2024-12-17 00:36:57.395336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.427 [2024-12-17 00:36:57.395349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.427 [2024-12-17 00:36:57.399210] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.427 [2024-12-17 00:36:57.399260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.427 [2024-12-17 00:36:57.399288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.427 [2024-12-17 00:36:57.403131] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.427 [2024-12-17 00:36:57.403180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.427 [2024-12-17 00:36:57.403207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.427 [2024-12-17 00:36:57.407112] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.427 [2024-12-17 00:36:57.407162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.427 [2024-12-17 00:36:57.407190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.427 [2024-12-17 00:36:57.411060] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.427 [2024-12-17 00:36:57.411109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.427 [2024-12-17 00:36:57.411137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.427 [2024-12-17 00:36:57.415081] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.427 [2024-12-17 00:36:57.415131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.427 [2024-12-17 00:36:57.415158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.427 [2024-12-17 00:36:57.418996] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.427 [2024-12-17 00:36:57.419045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.427 
[2024-12-17 00:36:57.419072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.427 [2024-12-17 00:36:57.423180] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.427 [2024-12-17 00:36:57.423251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.427 [2024-12-17 00:36:57.423280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.688 [2024-12-17 00:36:57.427760] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.688 [2024-12-17 00:36:57.427828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.688 [2024-12-17 00:36:57.427850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.688 [2024-12-17 00:36:57.431840] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.688 [2024-12-17 00:36:57.431893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.688 [2024-12-17 00:36:57.431922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.688 [2024-12-17 00:36:57.436061] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.688 [2024-12-17 00:36:57.436114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.688 [2024-12-17 00:36:57.436143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.688 [2024-12-17 00:36:57.439999] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.688 [2024-12-17 00:36:57.440051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.688 [2024-12-17 00:36:57.440078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.688 [2024-12-17 00:36:57.444018] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.688 [2024-12-17 00:36:57.444069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.688 [2024-12-17 00:36:57.444097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.688 [2024-12-17 00:36:57.447914] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.688 [2024-12-17 00:36:57.447964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14624 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:11.688 [2024-12-17 00:36:57.447992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.688 [2024-12-17 00:36:57.451859] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.688 [2024-12-17 00:36:57.451909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.688 [2024-12-17 00:36:57.451937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.688 [2024-12-17 00:36:57.455732] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.688 [2024-12-17 00:36:57.455782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.688 [2024-12-17 00:36:57.455810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.688 [2024-12-17 00:36:57.459718] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.688 [2024-12-17 00:36:57.459769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.688 [2024-12-17 00:36:57.459797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.688 [2024-12-17 00:36:57.463737] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.688 [2024-12-17 00:36:57.463787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.688 [2024-12-17 00:36:57.463815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.688 [2024-12-17 00:36:57.467697] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.688 [2024-12-17 00:36:57.467747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.688 [2024-12-17 00:36:57.467775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.688 [2024-12-17 00:36:57.471600] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.688 [2024-12-17 00:36:57.471650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.689 [2024-12-17 00:36:57.471678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.689 [2024-12-17 00:36:57.475545] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.689 [2024-12-17 00:36:57.475595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.689 [2024-12-17 00:36:57.475623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.689 [2024-12-17 00:36:57.479503] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.689 [2024-12-17 00:36:57.479554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.689 [2024-12-17 00:36:57.479581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.689 [2024-12-17 00:36:57.483524] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.689 [2024-12-17 00:36:57.483573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.689 [2024-12-17 00:36:57.483601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.689 [2024-12-17 00:36:57.487405] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.689 [2024-12-17 00:36:57.487455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.689 [2024-12-17 00:36:57.487482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.689 [2024-12-17 00:36:57.491312] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.689 [2024-12-17 00:36:57.491373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.689 [2024-12-17 00:36:57.491401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.689 [2024-12-17 00:36:57.495238] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.689 [2024-12-17 00:36:57.495287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.689 [2024-12-17 00:36:57.495315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.689 [2024-12-17 00:36:57.499214] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.689 [2024-12-17 00:36:57.499265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.689 [2024-12-17 00:36:57.499293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.689 [2024-12-17 00:36:57.503166] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.689 [2024-12-17 00:36:57.503216] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.689 [2024-12-17 00:36:57.503244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.689 [2024-12-17 00:36:57.507227] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.689 [2024-12-17 00:36:57.507277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.689 [2024-12-17 00:36:57.507304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.689 [2024-12-17 00:36:57.511161] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.689 [2024-12-17 00:36:57.511211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.689 [2024-12-17 00:36:57.511239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.689 [2024-12-17 00:36:57.515196] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.689 [2024-12-17 00:36:57.515245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.689 [2024-12-17 00:36:57.515272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.689 [2024-12-17 00:36:57.519189] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.689 [2024-12-17 00:36:57.519239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.689 [2024-12-17 00:36:57.519267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.689 [2024-12-17 00:36:57.523219] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.689 [2024-12-17 00:36:57.523269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.689 [2024-12-17 00:36:57.523296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.689 [2024-12-17 00:36:57.527164] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.689 [2024-12-17 00:36:57.527214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.689 [2024-12-17 00:36:57.527241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.689 [2024-12-17 00:36:57.531191] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.689 
[2024-12-17 00:36:57.531240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.689 [2024-12-17 00:36:57.531268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.689 [2024-12-17 00:36:57.535143] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.689 [2024-12-17 00:36:57.535193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.689 [2024-12-17 00:36:57.535221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.689 [2024-12-17 00:36:57.539106] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.689 [2024-12-17 00:36:57.539156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.689 [2024-12-17 00:36:57.539184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.689 [2024-12-17 00:36:57.542999] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.689 [2024-12-17 00:36:57.543049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.689 [2024-12-17 00:36:57.543076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.689 [2024-12-17 00:36:57.546925] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.689 [2024-12-17 00:36:57.546975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.689 [2024-12-17 00:36:57.547003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.689 [2024-12-17 00:36:57.550829] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.689 [2024-12-17 00:36:57.550878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.689 [2024-12-17 00:36:57.550906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.689 [2024-12-17 00:36:57.554814] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.689 [2024-12-17 00:36:57.554864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.689 [2024-12-17 00:36:57.554892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.689 [2024-12-17 00:36:57.558700] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xdb5f50) 00:21:11.689 [2024-12-17 00:36:57.558765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.690 [2024-12-17 00:36:57.558792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.690 [2024-12-17 00:36:57.562621] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.690 [2024-12-17 00:36:57.562674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.690 [2024-12-17 00:36:57.562686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.690 [2024-12-17 00:36:57.566656] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.690 [2024-12-17 00:36:57.566723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.690 [2024-12-17 00:36:57.566735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.690 [2024-12-17 00:36:57.570577] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.690 [2024-12-17 00:36:57.570629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.690 [2024-12-17 00:36:57.570641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.690 [2024-12-17 00:36:57.574444] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.690 [2024-12-17 00:36:57.574495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.690 [2024-12-17 00:36:57.574508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.690 [2024-12-17 00:36:57.578423] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.690 [2024-12-17 00:36:57.578489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.690 [2024-12-17 00:36:57.578501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.690 [2024-12-17 00:36:57.582287] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.690 [2024-12-17 00:36:57.582362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.690 [2024-12-17 00:36:57.582376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.690 [2024-12-17 00:36:57.586220] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.690 [2024-12-17 00:36:57.586269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.690 [2024-12-17 00:36:57.586296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.690 [2024-12-17 00:36:57.590121] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.690 [2024-12-17 00:36:57.590171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.690 [2024-12-17 00:36:57.590199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.690 [2024-12-17 00:36:57.594092] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.690 [2024-12-17 00:36:57.594142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.690 [2024-12-17 00:36:57.594170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.690 [2024-12-17 00:36:57.598037] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.690 [2024-12-17 00:36:57.598087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.690 [2024-12-17 00:36:57.598115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.690 [2024-12-17 00:36:57.601959] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.690 [2024-12-17 00:36:57.602008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.690 [2024-12-17 00:36:57.602036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.690 [2024-12-17 00:36:57.605908] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.690 [2024-12-17 00:36:57.605958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.690 [2024-12-17 00:36:57.605986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.690 [2024-12-17 00:36:57.609877] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.690 [2024-12-17 00:36:57.609926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.690 [2024-12-17 00:36:57.609954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:21:11.690 [2024-12-17 00:36:57.613820] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.690 [2024-12-17 00:36:57.613871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.690 [2024-12-17 00:36:57.613898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.690 [2024-12-17 00:36:57.617812] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.690 [2024-12-17 00:36:57.617863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.690 [2024-12-17 00:36:57.617891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.690 [2024-12-17 00:36:57.621727] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.690 [2024-12-17 00:36:57.621778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.690 [2024-12-17 00:36:57.621806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.690 [2024-12-17 00:36:57.625629] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.690 [2024-12-17 00:36:57.625678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.690 [2024-12-17 00:36:57.625706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.690 [2024-12-17 00:36:57.629519] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.690 [2024-12-17 00:36:57.629568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.690 [2024-12-17 00:36:57.629596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.690 [2024-12-17 00:36:57.633511] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.690 [2024-12-17 00:36:57.633561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.690 [2024-12-17 00:36:57.633589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.690 [2024-12-17 00:36:57.637429] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.690 [2024-12-17 00:36:57.637479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.690 [2024-12-17 00:36:57.637507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.690 [2024-12-17 00:36:57.641275] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.690 [2024-12-17 00:36:57.641350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.690 [2024-12-17 00:36:57.641364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.690 [2024-12-17 00:36:57.645188] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.690 [2024-12-17 00:36:57.645237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.690 [2024-12-17 00:36:57.645265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.690 [2024-12-17 00:36:57.649228] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.690 [2024-12-17 00:36:57.649280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.690 [2024-12-17 00:36:57.649308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.690 [2024-12-17 00:36:57.653108] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.690 [2024-12-17 00:36:57.653158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.690 [2024-12-17 00:36:57.653185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.690 [2024-12-17 00:36:57.657056] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.690 [2024-12-17 00:36:57.657106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.690 [2024-12-17 00:36:57.657133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.691 [2024-12-17 00:36:57.660922] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.691 [2024-12-17 00:36:57.660988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.691 [2024-12-17 00:36:57.661016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.691 [2024-12-17 00:36:57.664801] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.691 [2024-12-17 00:36:57.664851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.691 [2024-12-17 00:36:57.664879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.691 [2024-12-17 00:36:57.668689] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.691 [2024-12-17 00:36:57.668756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.691 [2024-12-17 00:36:57.668768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.691 [2024-12-17 00:36:57.672617] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.691 [2024-12-17 00:36:57.672655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.691 [2024-12-17 00:36:57.672668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.691 [2024-12-17 00:36:57.676358] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.691 [2024-12-17 00:36:57.676405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.691 [2024-12-17 00:36:57.676416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.691 [2024-12-17 00:36:57.680156] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.691 [2024-12-17 00:36:57.680206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.691 [2024-12-17 00:36:57.680234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.691 [2024-12-17 00:36:57.684075] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.691 [2024-12-17 00:36:57.684125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.691 [2024-12-17 00:36:57.684153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.691 [2024-12-17 00:36:57.688293] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.691 [2024-12-17 00:36:57.688371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.691 [2024-12-17 00:36:57.688384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.950 [2024-12-17 00:36:57.692796] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.950 [2024-12-17 00:36:57.692839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.950 [2024-12-17 00:36:57.692852] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.950 [2024-12-17 00:36:57.696684] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.951 [2024-12-17 00:36:57.696738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.951 [2024-12-17 00:36:57.696751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.951 [2024-12-17 00:36:57.701020] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.951 [2024-12-17 00:36:57.701072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.951 [2024-12-17 00:36:57.701100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.951 [2024-12-17 00:36:57.704944] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.951 [2024-12-17 00:36:57.704994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.951 [2024-12-17 00:36:57.705023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.951 [2024-12-17 00:36:57.708899] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.951 [2024-12-17 00:36:57.708979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.951 [2024-12-17 00:36:57.709006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.951 [2024-12-17 00:36:57.712905] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.951 [2024-12-17 00:36:57.712972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.951 [2024-12-17 00:36:57.713000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.951 [2024-12-17 00:36:57.716772] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.951 [2024-12-17 00:36:57.716825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.951 [2024-12-17 00:36:57.716867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.951 [2024-12-17 00:36:57.720737] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.951 [2024-12-17 00:36:57.720791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.951 
[2024-12-17 00:36:57.720803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.951 [2024-12-17 00:36:57.724559] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.951 [2024-12-17 00:36:57.724626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.951 [2024-12-17 00:36:57.724638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.951 [2024-12-17 00:36:57.728387] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.951 [2024-12-17 00:36:57.728434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.951 [2024-12-17 00:36:57.728446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.951 [2024-12-17 00:36:57.732201] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.951 [2024-12-17 00:36:57.732251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.951 [2024-12-17 00:36:57.732278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.951 [2024-12-17 00:36:57.736043] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.951 [2024-12-17 00:36:57.736092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.951 [2024-12-17 00:36:57.736120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.951 [2024-12-17 00:36:57.739926] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.951 [2024-12-17 00:36:57.739976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.951 [2024-12-17 00:36:57.740004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.951 [2024-12-17 00:36:57.743814] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb5f50) 00:21:11.951 [2024-12-17 00:36:57.743863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.951 [2024-12-17 00:36:57.743891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.951 7798.00 IOPS, 974.75 MiB/s 00:21:11.951 Latency(us) 00:21:11.951 [2024-12-17T00:36:57.954Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:11.951 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:21:11.951 nvme0n1 : 2.00 7793.65 974.21 
0.00 0.00 2049.96 808.03 4885.41 00:21:11.951 [2024-12-17T00:36:57.954Z] =================================================================================================================== 00:21:11.951 [2024-12-17T00:36:57.954Z] Total : 7793.65 974.21 0.00 0.00 2049.96 808.03 4885.41 00:21:11.951 { 00:21:11.951 "results": [ 00:21:11.951 { 00:21:11.951 "job": "nvme0n1", 00:21:11.951 "core_mask": "0x2", 00:21:11.951 "workload": "randread", 00:21:11.951 "status": "finished", 00:21:11.951 "queue_depth": 16, 00:21:11.951 "io_size": 131072, 00:21:11.951 "runtime": 2.00317, 00:21:11.951 "iops": 7793.647069395009, 00:21:11.951 "mibps": 974.2058836743761, 00:21:11.951 "io_failed": 0, 00:21:11.951 "io_timeout": 0, 00:21:11.951 "avg_latency_us": 2049.9587729718396, 00:21:11.951 "min_latency_us": 808.0290909090909, 00:21:11.951 "max_latency_us": 4885.410909090909 00:21:11.951 } 00:21:11.951 ], 00:21:11.951 "core_count": 1 00:21:11.951 } 00:21:11.951 00:36:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:11.951 00:36:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:11.951 00:36:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:11.951 00:36:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:11.951 | .driver_specific 00:21:11.951 | .nvme_error 00:21:11.951 | .status_code 00:21:11.951 | .command_transient_transport_error' 00:21:12.210 00:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 503 > 0 )) 00:21:12.210 00:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94571 00:21:12.210 00:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 94571 ']' 00:21:12.210 00:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 94571 00:21:12.210 00:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:21:12.210 00:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:12.210 00:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94571 00:21:12.210 00:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:12.210 00:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:12.210 killing process with pid 94571 00:21:12.210 00:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94571' 00:21:12.210 Received shutdown signal, test time was about 2.000000 seconds 00:21:12.210 00:21:12.210 Latency(us) 00:21:12.210 [2024-12-17T00:36:58.213Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.210 [2024-12-17T00:36:58.213Z] =================================================================================================================== 00:21:12.210 [2024-12-17T00:36:58.213Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:12.210 00:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 94571 
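The trace above is where digest.sh decides whether the digest-error pass succeeded: get_transient_errcount queries bdev_get_iostat over the bdevperf RPC socket, filters the per-bdev NVMe error counters with jq, and the check (( count > 0 )) is the pass criterion (503 transient transport errors were counted in this run) before the bdevperf process is killed. A minimal shell sketch of that readout, reusing the socket path, rpc.py location, jq filter and bdev name shown in the trace; the function body is a reconstruction for illustration, not the verbatim script:

get_transient_errcount() {
    local bdev=$1
    # The nvme_error counters are available because bdev_nvme_set_options is run
    # with --nvme-error-stat (see the setup trace for the next workload below).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)
(( errcount > 0 ))   # pass criterion; this run observed 503 transient transport errors

Once the count checks out, the harness kills and waits on the bdevperf pid (94571 here) and moves on to the next workload.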
00:21:12.210 00:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 94571 00:21:12.210 00:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:21:12.210 00:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:12.210 00:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:21:12.210 00:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:21:12.210 00:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:21:12.210 00:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94627 00:21:12.210 00:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:21:12.210 00:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94627 /var/tmp/bperf.sock 00:21:12.210 00:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 94627 ']' 00:21:12.210 00:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:12.210 00:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:12.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:12.210 00:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:12.210 00:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:12.210 00:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:12.469 [2024-12-17 00:36:58.259579] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:21:12.469 [2024-12-17 00:36:58.259691] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94627 ] 00:21:12.469 [2024-12-17 00:36:58.397054] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.469 [2024-12-17 00:36:58.431743] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:12.469 [2024-12-17 00:36:58.460379] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:12.728 00:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:12.728 00:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:21:12.728 00:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:12.728 00:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:12.987 00:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:12.987 00:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.987 00:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:12.987 00:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.987 00:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:12.987 00:36:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:13.246 nvme0n1 00:21:13.246 00:36:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:21:13.246 00:36:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.246 00:36:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:13.246 00:36:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.246 00:36:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:13.246 00:36:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:13.246 Running I/O for 2 seconds... 
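Between the shutdown of the previous bdevperf instance and the "Running I/O for 2 seconds" line, the trace shows the full setup for the randwrite pass: a fresh bdevperf is launched against /var/tmp/bperf.sock, NVMe error statistics and unlimited bdev retries are enabled, any previous crc32c error injection is cleared, the controller is attached with TCP data digest (--ddgst) enabled, and only then is crc32c corruption injected into the accel framework so that digest verification fails during the run. The sketch below condenses the exact commands from the trace; the sleep stands in for the harness's waitforlisten helper, and the rpc.py calls without -s mirror rpc_cmd, which in this harness appears to address the nvmf target application rather than bdevperf (both points are assumptions about the surrounding script, not shown in the trace):

BPERF_RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'

# Launch bdevperf: core mask 0x2, randwrite, 4 KiB I/O, 2 s, queue depth 128;
# -z makes it wait for the perform_tests RPC instead of starting I/O immediately.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
sleep 2   # stand-in for waitforlisten on /var/tmp/bperf.sock (assumption)

$BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable

# Attach the target subsystem with TCP data digest enabled so payloads are CRC32C-checked.
$BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Inject corruption into crc32c operations (-t corrupt -i 256, as traced above).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The repeated tcp.c:2233 data digest errors that follow are the injected corruption being detected on the write path, and the later IOPS line shows the workload still completing because the errors are transient and retried.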
00:21:13.246 [2024-12-17 00:36:59.195783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198fef90 00:21:13.246 [2024-12-17 00:36:59.198083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.246 [2024-12-17 00:36:59.198139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.246 [2024-12-17 00:36:59.209393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198feb58 00:21:13.246 [2024-12-17 00:36:59.211647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.246 [2024-12-17 00:36:59.211699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:13.246 [2024-12-17 00:36:59.222842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198fe2e8 00:21:13.246 [2024-12-17 00:36:59.225074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.246 [2024-12-17 00:36:59.225123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:13.246 [2024-12-17 00:36:59.236145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198fda78 00:21:13.246 [2024-12-17 00:36:59.238405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.246 [2024-12-17 00:36:59.238453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:13.246 [2024-12-17 00:36:59.250296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198fd208 00:21:13.505 [2024-12-17 00:36:59.252681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.505 [2024-12-17 00:36:59.252723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:13.505 [2024-12-17 00:36:59.264254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198fc998 00:21:13.505 [2024-12-17 00:36:59.266419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.505 [2024-12-17 00:36:59.266472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:13.505 [2024-12-17 00:36:59.277898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198fc128 00:21:13.505 [2024-12-17 00:36:59.279979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.505 [2024-12-17 00:36:59.280028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 
dnr:0 00:21:13.505 [2024-12-17 00:36:59.291424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198fb8b8 00:21:13.505 [2024-12-17 00:36:59.293593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.505 [2024-12-17 00:36:59.293642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:13.506 [2024-12-17 00:36:59.304969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198fb048 00:21:13.506 [2024-12-17 00:36:59.307074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.506 [2024-12-17 00:36:59.307120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:13.506 [2024-12-17 00:36:59.318360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198fa7d8 00:21:13.506 [2024-12-17 00:36:59.320405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.506 [2024-12-17 00:36:59.320454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:13.506 [2024-12-17 00:36:59.331830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f9f68 00:21:13.506 [2024-12-17 00:36:59.334002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.506 [2024-12-17 00:36:59.334051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:13.506 [2024-12-17 00:36:59.345273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f96f8 00:21:13.506 [2024-12-17 00:36:59.347268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.506 [2024-12-17 00:36:59.347337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:13.506 [2024-12-17 00:36:59.358555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f8e88 00:21:13.506 [2024-12-17 00:36:59.360538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.506 [2024-12-17 00:36:59.360607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:13.506 [2024-12-17 00:36:59.371788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f8618 00:21:13.506 [2024-12-17 00:36:59.373775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.506 [2024-12-17 00:36:59.373823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 
sqhd:0067 p:0 m:0 dnr:0 00:21:13.506 [2024-12-17 00:36:59.385121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f7da8 00:21:13.506 [2024-12-17 00:36:59.387160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.506 [2024-12-17 00:36:59.387206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:13.506 [2024-12-17 00:36:59.398468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f7538 00:21:13.506 [2024-12-17 00:36:59.400437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.506 [2024-12-17 00:36:59.400486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:13.506 [2024-12-17 00:36:59.411677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f6cc8 00:21:13.506 [2024-12-17 00:36:59.413659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:8774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.506 [2024-12-17 00:36:59.413707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:13.506 [2024-12-17 00:36:59.425119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f6458 00:21:13.506 [2024-12-17 00:36:59.427127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.506 [2024-12-17 00:36:59.427172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:13.506 [2024-12-17 00:36:59.438735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f5be8 00:21:13.506 [2024-12-17 00:36:59.440939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.506 [2024-12-17 00:36:59.441001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:13.506 [2024-12-17 00:36:59.454125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f5378 00:21:13.506 [2024-12-17 00:36:59.456394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:17080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.506 [2024-12-17 00:36:59.456451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:13.506 [2024-12-17 00:36:59.469952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f4b08 00:21:13.506 [2024-12-17 00:36:59.472017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.506 [2024-12-17 00:36:59.472065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:13.506 [2024-12-17 00:36:59.484918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f4298 00:21:13.506 [2024-12-17 00:36:59.486949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.506 [2024-12-17 00:36:59.486997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:13.506 [2024-12-17 00:36:59.499211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f3a28 00:21:13.506 [2024-12-17 00:36:59.501191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.506 [2024-12-17 00:36:59.501239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:13.765 [2024-12-17 00:36:59.514839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f31b8 00:21:13.765 [2024-12-17 00:36:59.516847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.765 [2024-12-17 00:36:59.516916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:13.765 [2024-12-17 00:36:59.529338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f2948 00:21:13.765 [2024-12-17 00:36:59.531254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.765 [2024-12-17 00:36:59.531304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:13.765 [2024-12-17 00:36:59.543604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f20d8 00:21:13.765 [2024-12-17 00:36:59.545509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.765 [2024-12-17 00:36:59.545560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:13.765 [2024-12-17 00:36:59.557934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f1868 00:21:13.765 [2024-12-17 00:36:59.559819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.766 [2024-12-17 00:36:59.559867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:13.766 [2024-12-17 00:36:59.572129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f0ff8 00:21:13.766 [2024-12-17 00:36:59.574011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.766 [2024-12-17 00:36:59.574058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:13.766 [2024-12-17 00:36:59.586632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f0788 00:21:13.766 [2024-12-17 00:36:59.588460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.766 [2024-12-17 00:36:59.588508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:13.766 [2024-12-17 00:36:59.600788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198eff18 00:21:13.766 [2024-12-17 00:36:59.602646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.766 [2024-12-17 00:36:59.602709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:13.766 [2024-12-17 00:36:59.615014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198ef6a8 00:21:13.766 [2024-12-17 00:36:59.617009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.766 [2024-12-17 00:36:59.617058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:13.766 [2024-12-17 00:36:59.629588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198eee38 00:21:13.766 [2024-12-17 00:36:59.631369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.766 [2024-12-17 00:36:59.631418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:13.766 [2024-12-17 00:36:59.643550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198ee5c8 00:21:13.766 [2024-12-17 00:36:59.645322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:24314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.766 [2024-12-17 00:36:59.645379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:13.766 [2024-12-17 00:36:59.657049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198edd58 00:21:13.766 [2024-12-17 00:36:59.658769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.766 [2024-12-17 00:36:59.658815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:13.766 [2024-12-17 00:36:59.670433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198ed4e8 00:21:13.766 [2024-12-17 00:36:59.672073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.766 [2024-12-17 00:36:59.672121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:13.766 [2024-12-17 00:36:59.683842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198ecc78 00:21:13.766 [2024-12-17 00:36:59.685584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.766 [2024-12-17 00:36:59.685633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:13.766 [2024-12-17 00:36:59.697289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198ec408 00:21:13.766 [2024-12-17 00:36:59.698975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:12791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.766 [2024-12-17 00:36:59.699022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:13.766 [2024-12-17 00:36:59.710753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198ebb98 00:21:13.766 [2024-12-17 00:36:59.712352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.766 [2024-12-17 00:36:59.712405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:13.766 [2024-12-17 00:36:59.724239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198eb328 00:21:13.766 [2024-12-17 00:36:59.725906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:14694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.766 [2024-12-17 00:36:59.725952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:13.766 [2024-12-17 00:36:59.737742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198eaab8 00:21:13.766 [2024-12-17 00:36:59.739314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.766 [2024-12-17 00:36:59.739384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:13.766 [2024-12-17 00:36:59.751217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198ea248 00:21:13.766 [2024-12-17 00:36:59.752903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:24201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.766 [2024-12-17 00:36:59.752964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:13.766 [2024-12-17 00:36:59.764725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198e99d8 00:21:13.766 [2024-12-17 00:36:59.766429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:13.766 [2024-12-17 00:36:59.766494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:14.027 [2024-12-17 00:36:59.779101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198e9168 00:21:14.027 [2024-12-17 00:36:59.780723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.027 [2024-12-17 00:36:59.780762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:14.027 [2024-12-17 00:36:59.792536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198e88f8 00:21:14.027 [2024-12-17 00:36:59.794081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:25014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.027 [2024-12-17 00:36:59.794129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:14.027 [2024-12-17 00:36:59.806012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198e8088 00:21:14.027 [2024-12-17 00:36:59.807593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.027 [2024-12-17 00:36:59.807642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:14.027 [2024-12-17 00:36:59.819369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198e7818 00:21:14.027 [2024-12-17 00:36:59.820897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.027 [2024-12-17 00:36:59.820946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:14.027 [2024-12-17 00:36:59.832832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198e6fa8 00:21:14.027 [2024-12-17 00:36:59.834426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:14296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.027 [2024-12-17 00:36:59.834455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:14.027 [2024-12-17 00:36:59.846330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198e6738 00:21:14.027 [2024-12-17 00:36:59.847796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.027 [2024-12-17 00:36:59.847845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:14.027 [2024-12-17 00:36:59.859619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198e5ec8 00:21:14.027 [2024-12-17 00:36:59.861189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.027 [2024-12-17 00:36:59.861237] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:14.027 [2024-12-17 00:36:59.875229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198e5658 00:21:14.027 [2024-12-17 00:36:59.876720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.027 [2024-12-17 00:36:59.876770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:14.027 [2024-12-17 00:36:59.889630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198e4de8 00:21:14.027 [2024-12-17 00:36:59.891155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.027 [2024-12-17 00:36:59.891203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:14.027 [2024-12-17 00:36:59.905589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198e4578 00:21:14.027 [2024-12-17 00:36:59.907204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.027 [2024-12-17 00:36:59.907252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:14.027 [2024-12-17 00:36:59.920994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198e3d08 00:21:14.027 [2024-12-17 00:36:59.922530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.027 [2024-12-17 00:36:59.922580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:14.027 [2024-12-17 00:36:59.935568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198e3498 00:21:14.027 [2024-12-17 00:36:59.937104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.027 [2024-12-17 00:36:59.937152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:14.027 [2024-12-17 00:36:59.949838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198e2c28 00:21:14.027 [2024-12-17 00:36:59.951188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.027 [2024-12-17 00:36:59.951234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:14.027 [2024-12-17 00:36:59.963236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198e23b8 00:21:14.027 [2024-12-17 00:36:59.964622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.027 [2024-12-17 
00:36:59.964656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:14.027 [2024-12-17 00:36:59.976675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198e1b48 00:21:14.027 [2024-12-17 00:36:59.978128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.027 [2024-12-17 00:36:59.978176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:14.027 [2024-12-17 00:36:59.990476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198e12d8 00:21:14.027 [2024-12-17 00:36:59.991765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.027 [2024-12-17 00:36:59.991811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:14.027 [2024-12-17 00:37:00.004745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198e0a68 00:21:14.027 [2024-12-17 00:37:00.006577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.027 [2024-12-17 00:37:00.006639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:14.027 [2024-12-17 00:37:00.020180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198e01f8 00:21:14.027 [2024-12-17 00:37:00.022008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.027 [2024-12-17 00:37:00.022075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:14.287 [2024-12-17 00:37:00.036700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198df988 00:21:14.287 [2024-12-17 00:37:00.037967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.287 [2024-12-17 00:37:00.038019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:14.287 [2024-12-17 00:37:00.050861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198df118 00:21:14.287 [2024-12-17 00:37:00.052131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.287 [2024-12-17 00:37:00.052181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:14.287 [2024-12-17 00:37:00.064427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198de8a8 00:21:14.287 [2024-12-17 00:37:00.065687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2585 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:14.287 [2024-12-17 00:37:00.065737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:14.287 [2024-12-17 00:37:00.077931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198de038 00:21:14.287 [2024-12-17 00:37:00.079082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.287 [2024-12-17 00:37:00.079129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:14.287 [2024-12-17 00:37:00.096744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198de038 00:21:14.287 [2024-12-17 00:37:00.098865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.287 [2024-12-17 00:37:00.098913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.287 [2024-12-17 00:37:00.110052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198de8a8 00:21:14.287 [2024-12-17 00:37:00.112189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.287 [2024-12-17 00:37:00.112236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:14.287 [2024-12-17 00:37:00.123323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198df118 00:21:14.287 [2024-12-17 00:37:00.125513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.287 [2024-12-17 00:37:00.125562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:14.287 [2024-12-17 00:37:00.137132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198df988 00:21:14.287 [2024-12-17 00:37:00.139274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.287 [2024-12-17 00:37:00.139348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:14.287 [2024-12-17 00:37:00.150434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198e01f8 00:21:14.287 [2024-12-17 00:37:00.152460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.287 [2024-12-17 00:37:00.152510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:14.287 [2024-12-17 00:37:00.163579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198e0a68 00:21:14.287 [2024-12-17 00:37:00.165720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19781 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.287 [2024-12-17 00:37:00.165768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:14.287 [2024-12-17 00:37:00.176854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198e12d8 00:21:14.287 [2024-12-17 00:37:00.179719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.287 [2024-12-17 00:37:00.179767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:14.287 18092.00 IOPS, 70.67 MiB/s [2024-12-17T00:37:00.290Z] [2024-12-17 00:37:00.191141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198e1b48 00:21:14.287 [2024-12-17 00:37:00.193270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.287 [2024-12-17 00:37:00.193360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:14.287 [2024-12-17 00:37:00.204413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198e23b8 00:21:14.287 [2024-12-17 00:37:00.206468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.287 [2024-12-17 00:37:00.206518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:14.287 [2024-12-17 00:37:00.217835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198e2c28 00:21:14.287 [2024-12-17 00:37:00.219853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.287 [2024-12-17 00:37:00.219900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:14.288 [2024-12-17 00:37:00.231064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198e3498 00:21:14.288 [2024-12-17 00:37:00.233155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.288 [2024-12-17 00:37:00.233201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:14.288 [2024-12-17 00:37:00.244489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198e3d08 00:21:14.288 [2024-12-17 00:37:00.246510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.288 [2024-12-17 00:37:00.246559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:14.288 [2024-12-17 00:37:00.257653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198e4578 00:21:14.288 [2024-12-17 00:37:00.259608] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.288 [2024-12-17 00:37:00.259657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:14.288 [2024-12-17 00:37:00.270869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198e4de8 00:21:14.288 [2024-12-17 00:37:00.272857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.288 [2024-12-17 00:37:00.272905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:14.288 [2024-12-17 00:37:00.284266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198e5658 00:21:14.288 [2024-12-17 00:37:00.286306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:9788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.288 [2024-12-17 00:37:00.286360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:14.547 [2024-12-17 00:37:00.298853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198e5ec8 00:21:14.547 [2024-12-17 00:37:00.300824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.547 [2024-12-17 00:37:00.300922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:14.547 [2024-12-17 00:37:00.312157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198e6738 00:21:14.547 [2024-12-17 00:37:00.314145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:18125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.547 [2024-12-17 00:37:00.314193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:14.547 [2024-12-17 00:37:00.325637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198e6fa8 00:21:14.547 [2024-12-17 00:37:00.327468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.547 [2024-12-17 00:37:00.327515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:14.547 [2024-12-17 00:37:00.339150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198e7818 00:21:14.547 [2024-12-17 00:37:00.341159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:3890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.547 [2024-12-17 00:37:00.341206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:14.547 [2024-12-17 00:37:00.352478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198e8088 00:21:14.547 [2024-12-17 00:37:00.354455] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.547 [2024-12-17 00:37:00.354507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:14.547 [2024-12-17 00:37:00.365882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198e88f8 00:21:14.547 [2024-12-17 00:37:00.367729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:21921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.547 [2024-12-17 00:37:00.367776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:14.547 [2024-12-17 00:37:00.379086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198e9168 00:21:14.547 [2024-12-17 00:37:00.381011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.547 [2024-12-17 00:37:00.381056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:14.547 [2024-12-17 00:37:00.392418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198e99d8 00:21:14.547 [2024-12-17 00:37:00.394257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:18505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.547 [2024-12-17 00:37:00.394304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:14.547 [2024-12-17 00:37:00.405511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198ea248 00:21:14.547 [2024-12-17 00:37:00.407253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.547 [2024-12-17 00:37:00.407301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:14.547 [2024-12-17 00:37:00.418617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198eaab8 00:21:14.547 [2024-12-17 00:37:00.420369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:14006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.547 [2024-12-17 00:37:00.420414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:14.547 [2024-12-17 00:37:00.431573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198eb328 00:21:14.547 [2024-12-17 00:37:00.433382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.547 [2024-12-17 00:37:00.433439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:14.547 [2024-12-17 00:37:00.444849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198ebb98 00:21:14.547 [2024-12-17 
00:37:00.446619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.547 [2024-12-17 00:37:00.446668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:14.547 [2024-12-17 00:37:00.458043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198ec408 00:21:14.547 [2024-12-17 00:37:00.459829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.547 [2024-12-17 00:37:00.459876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:14.547 [2024-12-17 00:37:00.471358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198ecc78 00:21:14.547 [2024-12-17 00:37:00.473131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.547 [2024-12-17 00:37:00.473179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:14.548 [2024-12-17 00:37:00.484737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198ed4e8 00:21:14.548 [2024-12-17 00:37:00.486439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.548 [2024-12-17 00:37:00.486486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:14.548 [2024-12-17 00:37:00.497922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198edd58 00:21:14.548 [2024-12-17 00:37:00.499568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.548 [2024-12-17 00:37:00.499614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:14.548 [2024-12-17 00:37:00.511416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198ee5c8 00:21:14.548 [2024-12-17 00:37:00.513192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.548 [2024-12-17 00:37:00.513238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:14.548 [2024-12-17 00:37:00.524824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198eee38 00:21:14.548 [2024-12-17 00:37:00.526525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.548 [2024-12-17 00:37:00.526574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:14.548 [2024-12-17 00:37:00.538016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198ef6a8 00:21:14.548 
[2024-12-17 00:37:00.539662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.548 [2024-12-17 00:37:00.539693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:14.807 [2024-12-17 00:37:00.552001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198eff18 00:21:14.807 [2024-12-17 00:37:00.553894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.807 [2024-12-17 00:37:00.553943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:14.807 [2024-12-17 00:37:00.566118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f0788 00:21:14.807 [2024-12-17 00:37:00.567782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.807 [2024-12-17 00:37:00.567831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:14.807 [2024-12-17 00:37:00.579568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f0ff8 00:21:14.807 [2024-12-17 00:37:00.581240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.807 [2024-12-17 00:37:00.581289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:14.807 [2024-12-17 00:37:00.593716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f1868 00:21:14.807 [2024-12-17 00:37:00.595769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:24419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.807 [2024-12-17 00:37:00.595817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:14.807 [2024-12-17 00:37:00.608240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f20d8 00:21:14.807 [2024-12-17 00:37:00.609849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.807 [2024-12-17 00:37:00.609896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:14.807 [2024-12-17 00:37:00.622278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f2948 00:21:14.807 [2024-12-17 00:37:00.624007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.807 [2024-12-17 00:37:00.624055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:14.807 [2024-12-17 00:37:00.637475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f31b8 
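Every data_crc32_calc_done error in this stretch is paired with a WRITE completion carrying COMMAND TRANSIENT TRANSPORT ERROR (00/22), and it is that completion counter, not the console output, that the test consumes: the 143 errors this run accumulates are read back through the get_transient_errcount / bdev_get_iostat trace that appears further down. A minimal standalone version of that query, kept to the exact socket, bdev name and jq filter shown in that trace, and assuming the bdevperf process behind /var/tmp/bperf.sock is still alive with --nvme-error-stat applied (the bdev_nvme_set_options call is visible in the trace for the next run), would be:

  # Read the per-status-code NVMe error counters from the running bdevperf instance
  # and pull out the transient transport error count that the digest test checks.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0]
      | .driver_specific
      | .nvme_error
      | .status_code
      | .command_transient_transport_error'

The value returned is the same counter that drives the (( 143 > 0 )) check below; anything greater than zero means the injected digest corruption was actually observed on the wire.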
00:21:14.807 [2024-12-17 00:37:00.639238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.807 [2024-12-17 00:37:00.639288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:14.807 [2024-12-17 00:37:00.653122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f3a28 00:21:14.807 [2024-12-17 00:37:00.654845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.807 [2024-12-17 00:37:00.654891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:14.807 [2024-12-17 00:37:00.667678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f4298 00:21:14.807 [2024-12-17 00:37:00.669305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.807 [2024-12-17 00:37:00.669376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:14.807 [2024-12-17 00:37:00.681967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f4b08 00:21:14.807 [2024-12-17 00:37:00.683562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.807 [2024-12-17 00:37:00.683594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:14.807 [2024-12-17 00:37:00.696275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f5378 00:21:14.807 [2024-12-17 00:37:00.697864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.807 [2024-12-17 00:37:00.697913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:14.807 [2024-12-17 00:37:00.710637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f5be8 00:21:14.807 [2024-12-17 00:37:00.712166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.807 [2024-12-17 00:37:00.712216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:14.807 [2024-12-17 00:37:00.725195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f6458 00:21:14.808 [2024-12-17 00:37:00.726746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.808 [2024-12-17 00:37:00.726807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:14.808 [2024-12-17 00:37:00.739544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with 
pdu=0x2000198f6cc8 00:21:14.808 [2024-12-17 00:37:00.741117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:10665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.808 [2024-12-17 00:37:00.741166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:14.808 [2024-12-17 00:37:00.754032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f7538 00:21:14.808 [2024-12-17 00:37:00.755566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.808 [2024-12-17 00:37:00.755601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:14.808 [2024-12-17 00:37:00.768303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f7da8 00:21:14.808 [2024-12-17 00:37:00.769822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:14863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.808 [2024-12-17 00:37:00.769870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:14.808 [2024-12-17 00:37:00.782610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f8618 00:21:14.808 [2024-12-17 00:37:00.784059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.808 [2024-12-17 00:37:00.784106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:14.808 [2024-12-17 00:37:00.796966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f8e88 00:21:14.808 [2024-12-17 00:37:00.798370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.808 [2024-12-17 00:37:00.798421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:14.808 [2024-12-17 00:37:00.811009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f96f8 00:21:15.067 [2024-12-17 00:37:00.812530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.067 [2024-12-17 00:37:00.812604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:15.067 [2024-12-17 00:37:00.824899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f9f68 00:21:15.067 [2024-12-17 00:37:00.826269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.067 [2024-12-17 00:37:00.826346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:15.067 [2024-12-17 00:37:00.838452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xa36210) with pdu=0x2000198fa7d8 00:21:15.067 [2024-12-17 00:37:00.839791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.067 [2024-12-17 00:37:00.839838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:15.067 [2024-12-17 00:37:00.851876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198fb048 00:21:15.067 [2024-12-17 00:37:00.853250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.067 [2024-12-17 00:37:00.853297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:15.067 [2024-12-17 00:37:00.865302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198fb8b8 00:21:15.067 [2024-12-17 00:37:00.866646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.067 [2024-12-17 00:37:00.866678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:15.067 [2024-12-17 00:37:00.878854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198fc128 00:21:15.067 [2024-12-17 00:37:00.880147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:18198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.067 [2024-12-17 00:37:00.880194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:15.067 [2024-12-17 00:37:00.892411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198fc998 00:21:15.067 [2024-12-17 00:37:00.893746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:9803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.067 [2024-12-17 00:37:00.893793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:15.067 [2024-12-17 00:37:00.905780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198fd208 00:21:15.067 [2024-12-17 00:37:00.907045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.067 [2024-12-17 00:37:00.907092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:15.067 [2024-12-17 00:37:00.920846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198fda78 00:21:15.067 [2024-12-17 00:37:00.922123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.067 [2024-12-17 00:37:00.922151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:15.067 [2024-12-17 00:37:00.936611] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198fe2e8 00:21:15.067 [2024-12-17 00:37:00.937987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.067 [2024-12-17 00:37:00.938036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:15.067 [2024-12-17 00:37:00.951333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198feb58 00:21:15.067 [2024-12-17 00:37:00.952736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.067 [2024-12-17 00:37:00.952771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:15.067 [2024-12-17 00:37:00.970627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198fef90 00:21:15.068 [2024-12-17 00:37:00.972897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.068 [2024-12-17 00:37:00.972947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.068 [2024-12-17 00:37:00.983976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198feb58 00:21:15.068 [2024-12-17 00:37:00.986314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.068 [2024-12-17 00:37:00.986368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:15.068 [2024-12-17 00:37:00.997911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198fe2e8 00:21:15.068 [2024-12-17 00:37:01.000177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.068 [2024-12-17 00:37:01.000224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:15.068 [2024-12-17 00:37:01.011392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198fda78 00:21:15.068 [2024-12-17 00:37:01.013671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.068 [2024-12-17 00:37:01.013734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:15.068 [2024-12-17 00:37:01.025254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198fd208 00:21:15.068 [2024-12-17 00:37:01.027416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.068 [2024-12-17 00:37:01.027463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:15.068 [2024-12-17 00:37:01.038784] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198fc998 00:21:15.068 [2024-12-17 00:37:01.041073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.068 [2024-12-17 00:37:01.041121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:15.068 [2024-12-17 00:37:01.052437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198fc128 00:21:15.068 [2024-12-17 00:37:01.054616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.068 [2024-12-17 00:37:01.054666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:15.068 [2024-12-17 00:37:01.066179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198fb8b8 00:21:15.068 [2024-12-17 00:37:01.068536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.068 [2024-12-17 00:37:01.068610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:15.327 [2024-12-17 00:37:01.080643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198fb048 00:21:15.327 [2024-12-17 00:37:01.082763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.328 [2024-12-17 00:37:01.082815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:15.328 [2024-12-17 00:37:01.094145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198fa7d8 00:21:15.328 [2024-12-17 00:37:01.096288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.328 [2024-12-17 00:37:01.096344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:15.328 [2024-12-17 00:37:01.107701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f9f68 00:21:15.328 [2024-12-17 00:37:01.109831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.328 [2024-12-17 00:37:01.109879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:15.328 [2024-12-17 00:37:01.121406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f96f8 00:21:15.328 [2024-12-17 00:37:01.123430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.328 [2024-12-17 00:37:01.123479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:15.328 [2024-12-17 
00:37:01.134757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f8e88 00:21:15.328 [2024-12-17 00:37:01.136923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.328 [2024-12-17 00:37:01.136984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:15.328 [2024-12-17 00:37:01.148489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f8618 00:21:15.328 [2024-12-17 00:37:01.151029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.328 [2024-12-17 00:37:01.151076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:15.328 [2024-12-17 00:37:01.163340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f7da8 00:21:15.328 [2024-12-17 00:37:01.165445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.328 [2024-12-17 00:37:01.165491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:15.328 [2024-12-17 00:37:01.176840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36210) with pdu=0x2000198f7538 00:21:15.328 [2024-12-17 00:37:01.179694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.328 [2024-12-17 00:37:01.179757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:15.328 18217.50 IOPS, 71.16 MiB/s 00:21:15.328 Latency(us) 00:21:15.328 [2024-12-17T00:37:01.331Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:15.328 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:15.328 nvme0n1 : 2.01 18259.19 71.32 0.00 0.00 7004.60 1980.97 25618.62 00:21:15.328 [2024-12-17T00:37:01.331Z] =================================================================================================================== 00:21:15.328 [2024-12-17T00:37:01.331Z] Total : 18259.19 71.32 0.00 0.00 7004.60 1980.97 25618.62 00:21:15.328 { 00:21:15.328 "results": [ 00:21:15.328 { 00:21:15.328 "job": "nvme0n1", 00:21:15.328 "core_mask": "0x2", 00:21:15.328 "workload": "randwrite", 00:21:15.328 "status": "finished", 00:21:15.328 "queue_depth": 128, 00:21:15.328 "io_size": 4096, 00:21:15.328 "runtime": 2.009344, 00:21:15.328 "iops": 18259.193050070073, 00:21:15.328 "mibps": 71.32497285183622, 00:21:15.328 "io_failed": 0, 00:21:15.328 "io_timeout": 0, 00:21:15.328 "avg_latency_us": 7004.603012148799, 00:21:15.328 "min_latency_us": 1980.9745454545455, 00:21:15.328 "max_latency_us": 25618.618181818183 00:21:15.328 } 00:21:15.328 ], 00:21:15.328 "core_count": 1 00:21:15.328 } 00:21:15.328 00:37:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:15.328 00:37:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:15.328 00:37:01 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:15.328 00:37:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:15.328 | .driver_specific 00:21:15.328 | .nvme_error 00:21:15.328 | .status_code 00:21:15.328 | .command_transient_transport_error' 00:21:15.587 00:37:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 143 > 0 )) 00:21:15.587 00:37:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94627 00:21:15.587 00:37:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 94627 ']' 00:21:15.587 00:37:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 94627 00:21:15.587 00:37:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:21:15.587 00:37:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:15.587 00:37:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94627 00:21:15.587 00:37:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:15.587 00:37:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:15.587 killing process with pid 94627 00:21:15.587 00:37:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94627' 00:21:15.587 00:37:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 94627 00:21:15.587 Received shutdown signal, test time was about 2.000000 seconds 00:21:15.587 00:21:15.587 Latency(us) 00:21:15.587 [2024-12-17T00:37:01.590Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:15.587 [2024-12-17T00:37:01.590Z] =================================================================================================================== 00:21:15.587 [2024-12-17T00:37:01.590Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:15.587 00:37:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 94627 00:21:15.846 00:37:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:21:15.846 00:37:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:15.846 00:37:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:21:15.846 00:37:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:21:15.846 00:37:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:21:15.846 00:37:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94674 00:21:15.846 00:37:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:21:15.846 00:37:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94674 /var/tmp/bperf.sock 00:21:15.846 00:37:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@831 -- # '[' -z 94674 ']' 00:21:15.846 00:37:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:15.846 00:37:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:15.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:15.846 00:37:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:15.846 00:37:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:15.846 00:37:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:15.846 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:15.846 Zero copy mechanism will not be used. 00:21:15.846 [2024-12-17 00:37:01.712817] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:21:15.846 [2024-12-17 00:37:01.712926] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94674 ] 00:21:15.846 [2024-12-17 00:37:01.848499] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.106 [2024-12-17 00:37:01.883366] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:16.106 [2024-12-17 00:37:01.912146] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:16.106 00:37:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:16.106 00:37:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:21:16.106 00:37:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:16.106 00:37:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:16.364 00:37:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:16.364 00:37:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.364 00:37:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:16.364 00:37:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.364 00:37:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:16.364 00:37:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:16.623 nvme0n1 00:21:16.623 00:37:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd 
accel_error_inject_error -o crc32c -t corrupt -i 32 00:21:16.623 00:37:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.623 00:37:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:16.624 00:37:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.624 00:37:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:16.624 00:37:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:16.883 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:16.883 Zero copy mechanism will not be used. 00:21:16.883 Running I/O for 2 seconds... 00:21:16.883 [2024-12-17 00:37:02.640528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.883 [2024-12-17 00:37:02.640889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.883 [2024-12-17 00:37:02.640957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.883 [2024-12-17 00:37:02.645414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.883 [2024-12-17 00:37:02.645713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.883 [2024-12-17 00:37:02.645741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.883 [2024-12-17 00:37:02.650129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.883 [2024-12-17 00:37:02.650454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.883 [2024-12-17 00:37:02.650477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.883 [2024-12-17 00:37:02.654834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.883 [2024-12-17 00:37:02.655143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.883 [2024-12-17 00:37:02.655174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.883 [2024-12-17 00:37:02.659436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.883 [2024-12-17 00:37:02.659755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.883 [2024-12-17 00:37:02.659783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.883 [2024-12-17 00:37:02.664115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.883 [2024-12-17 00:37:02.664437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.883 [2024-12-17 00:37:02.664459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.883 [2024-12-17 00:37:02.668809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.883 [2024-12-17 00:37:02.669123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.883 [2024-12-17 00:37:02.669151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.883 [2024-12-17 00:37:02.673454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.883 [2024-12-17 00:37:02.673768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.883 [2024-12-17 00:37:02.673798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.883 [2024-12-17 00:37:02.678097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.883 [2024-12-17 00:37:02.678428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.883 [2024-12-17 00:37:02.678452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.883 [2024-12-17 00:37:02.682697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.883 [2024-12-17 00:37:02.683018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.883 [2024-12-17 00:37:02.683061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.883 [2024-12-17 00:37:02.687268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.883 [2024-12-17 00:37:02.687600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.883 [2024-12-17 00:37:02.687638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.883 [2024-12-17 00:37:02.691790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.883 [2024-12-17 00:37:02.692108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.883 [2024-12-17 00:37:02.692136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.883 [2024-12-17 00:37:02.696271] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.883 [2024-12-17 00:37:02.696627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.883 [2024-12-17 00:37:02.696668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.883 [2024-12-17 00:37:02.700842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.883 [2024-12-17 00:37:02.701199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.883 [2024-12-17 00:37:02.701231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.883 [2024-12-17 00:37:02.705526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.883 [2024-12-17 00:37:02.705847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.883 [2024-12-17 00:37:02.705887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.883 [2024-12-17 00:37:02.710065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.883 [2024-12-17 00:37:02.710394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.883 [2024-12-17 00:37:02.710417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.883 [2024-12-17 00:37:02.714720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.883 [2024-12-17 00:37:02.715033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.883 [2024-12-17 00:37:02.715060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.883 [2024-12-17 00:37:02.719424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.884 [2024-12-17 00:37:02.719722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.884 [2024-12-17 00:37:02.719749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.884 [2024-12-17 00:37:02.724162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.884 [2024-12-17 00:37:02.724507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.884 [2024-12-17 00:37:02.724531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
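The errors in this second pass are produced the same way as before, only with the 131072-byte randwrite pattern at queue depth 16 that run_bperf_err requested. Condensed from the xtrace lines above into standalone commands (the bperf-side calls use the /var/tmp/bperf.sock expansions shown at host/digest.sh@18; the accel_error_inject_error calls only appear behind the rpc_cmd wrapper, so using rpc.py's default RPC socket for them here is an assumption), the sequence is roughly:

  # Start bdevperf with its own RPC socket (host/digest.sh@57).
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

  # Track NVMe error counters and retry transient failures indefinitely (host/digest.sh@61).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Clear any previous injection, then attach with data digest enabled (host/digest.sh@63-64).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt the crc32c results so data digests stop matching (host/digest.sh@67),
  # then run the timed workload (host/digest.sh@69).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each digest mismatch then surfaces exactly as logged here: a data_crc32_calc_done error on the TCP qpair followed by a completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which the bdev_nvme layer keeps retrying because of the unlimited --bdev-retry-count.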
00:21:16.884 [2024-12-17 00:37:02.728929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.884 [2024-12-17 00:37:02.729253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.884 [2024-12-17 00:37:02.729281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.884 [2024-12-17 00:37:02.733609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.884 [2024-12-17 00:37:02.733926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.884 [2024-12-17 00:37:02.733954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.884 [2024-12-17 00:37:02.738196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.884 [2024-12-17 00:37:02.738545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.884 [2024-12-17 00:37:02.738573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.884 [2024-12-17 00:37:02.742985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.884 [2024-12-17 00:37:02.743307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.884 [2024-12-17 00:37:02.743350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.884 [2024-12-17 00:37:02.747549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.884 [2024-12-17 00:37:02.747870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.884 [2024-12-17 00:37:02.747913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.884 [2024-12-17 00:37:02.752161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.884 [2024-12-17 00:37:02.752491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.884 [2024-12-17 00:37:02.752518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.884 [2024-12-17 00:37:02.756855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.884 [2024-12-17 00:37:02.757192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.884 [2024-12-17 00:37:02.757224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.884 [2024-12-17 00:37:02.761616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.884 [2024-12-17 00:37:02.761926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.884 [2024-12-17 00:37:02.761955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.884 [2024-12-17 00:37:02.766283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.884 [2024-12-17 00:37:02.766590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.884 [2024-12-17 00:37:02.766621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.884 [2024-12-17 00:37:02.770851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.884 [2024-12-17 00:37:02.771160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.884 [2024-12-17 00:37:02.771188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.884 [2024-12-17 00:37:02.775511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.884 [2024-12-17 00:37:02.775823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.884 [2024-12-17 00:37:02.775851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.884 [2024-12-17 00:37:02.780217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.884 [2024-12-17 00:37:02.780538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.884 [2024-12-17 00:37:02.780590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.884 [2024-12-17 00:37:02.784948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.884 [2024-12-17 00:37:02.785267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.884 [2024-12-17 00:37:02.785297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.884 [2024-12-17 00:37:02.789605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.884 [2024-12-17 00:37:02.789924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.884 [2024-12-17 00:37:02.789952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.884 [2024-12-17 00:37:02.794200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.884 [2024-12-17 00:37:02.794528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.884 [2024-12-17 00:37:02.794556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.884 [2024-12-17 00:37:02.798802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.884 [2024-12-17 00:37:02.799123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.884 [2024-12-17 00:37:02.799162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.884 [2024-12-17 00:37:02.803333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.884 [2024-12-17 00:37:02.803652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.884 [2024-12-17 00:37:02.803679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.884 [2024-12-17 00:37:02.807876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.884 [2024-12-17 00:37:02.808192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.884 [2024-12-17 00:37:02.808221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.884 [2024-12-17 00:37:02.812398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.884 [2024-12-17 00:37:02.812748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.884 [2024-12-17 00:37:02.812777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.884 [2024-12-17 00:37:02.817191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.884 [2024-12-17 00:37:02.817516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.884 [2024-12-17 00:37:02.817542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.884 [2024-12-17 00:37:02.821915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.884 [2024-12-17 00:37:02.822225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.884 [2024-12-17 00:37:02.822254] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.884 [2024-12-17 00:37:02.826582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.884 [2024-12-17 00:37:02.826890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.884 [2024-12-17 00:37:02.826917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.884 [2024-12-17 00:37:02.831227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.884 [2024-12-17 00:37:02.831551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.884 [2024-12-17 00:37:02.831581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.884 [2024-12-17 00:37:02.836075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.885 [2024-12-17 00:37:02.836414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.885 [2024-12-17 00:37:02.836440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.885 [2024-12-17 00:37:02.842154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.885 [2024-12-17 00:37:02.842504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.885 [2024-12-17 00:37:02.842527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.885 [2024-12-17 00:37:02.847485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.885 [2024-12-17 00:37:02.847805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.885 [2024-12-17 00:37:02.847849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.885 [2024-12-17 00:37:02.852022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.885 [2024-12-17 00:37:02.852339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.885 [2024-12-17 00:37:02.852375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.885 [2024-12-17 00:37:02.856548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.885 [2024-12-17 00:37:02.856886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.885 
[2024-12-17 00:37:02.856928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.885 [2024-12-17 00:37:02.861137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.885 [2024-12-17 00:37:02.861465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.885 [2024-12-17 00:37:02.861488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.885 [2024-12-17 00:37:02.865702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.885 [2024-12-17 00:37:02.866020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.885 [2024-12-17 00:37:02.866048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.885 [2024-12-17 00:37:02.870314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.885 [2024-12-17 00:37:02.870643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.885 [2024-12-17 00:37:02.870666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.885 [2024-12-17 00:37:02.875002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.885 [2024-12-17 00:37:02.875319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.885 [2024-12-17 00:37:02.875355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.885 [2024-12-17 00:37:02.879537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.885 [2024-12-17 00:37:02.879844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.885 [2024-12-17 00:37:02.879874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.885 [2024-12-17 00:37:02.884374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:16.885 [2024-12-17 00:37:02.884716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.885 [2024-12-17 00:37:02.884746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.145 [2024-12-17 00:37:02.889575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.145 [2024-12-17 00:37:02.889880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:17.145 [2024-12-17 00:37:02.889908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.145 [2024-12-17 00:37:02.894372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.145 [2024-12-17 00:37:02.894666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.145 [2024-12-17 00:37:02.894705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.145 [2024-12-17 00:37:02.899040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.145 [2024-12-17 00:37:02.899358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.145 [2024-12-17 00:37:02.899397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.145 [2024-12-17 00:37:02.903595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.145 [2024-12-17 00:37:02.903912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.145 [2024-12-17 00:37:02.903942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.145 [2024-12-17 00:37:02.908158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.145 [2024-12-17 00:37:02.908486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.145 [2024-12-17 00:37:02.908513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.145 [2024-12-17 00:37:02.912727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.145 [2024-12-17 00:37:02.913068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.146 [2024-12-17 00:37:02.913105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.146 [2024-12-17 00:37:02.917362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.146 [2024-12-17 00:37:02.917692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.146 [2024-12-17 00:37:02.917725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.146 [2024-12-17 00:37:02.921906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.146 [2024-12-17 00:37:02.922225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.146 [2024-12-17 00:37:02.922253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.146 [2024-12-17 00:37:02.926569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.146 [2024-12-17 00:37:02.926889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.146 [2024-12-17 00:37:02.926929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.146 [2024-12-17 00:37:02.931049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.146 [2024-12-17 00:37:02.931383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.146 [2024-12-17 00:37:02.931413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.146 [2024-12-17 00:37:02.935611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.146 [2024-12-17 00:37:02.935927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.146 [2024-12-17 00:37:02.935957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.146 [2024-12-17 00:37:02.940214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.146 [2024-12-17 00:37:02.940544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.146 [2024-12-17 00:37:02.940607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.146 [2024-12-17 00:37:02.944835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.146 [2024-12-17 00:37:02.945154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.146 [2024-12-17 00:37:02.945186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.146 [2024-12-17 00:37:02.949408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.146 [2024-12-17 00:37:02.949725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.146 [2024-12-17 00:37:02.949766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.146 [2024-12-17 00:37:02.954067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.146 [2024-12-17 00:37:02.954397] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.146 [2024-12-17 00:37:02.954418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.146 [2024-12-17 00:37:02.959015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.146 [2024-12-17 00:37:02.959343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.146 [2024-12-17 00:37:02.959382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.146 [2024-12-17 00:37:02.964200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.146 [2024-12-17 00:37:02.964600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.146 [2024-12-17 00:37:02.964637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.146 [2024-12-17 00:37:02.969640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.146 [2024-12-17 00:37:02.970005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.146 [2024-12-17 00:37:02.970039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.146 [2024-12-17 00:37:02.975054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.146 [2024-12-17 00:37:02.975372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.146 [2024-12-17 00:37:02.975413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.146 [2024-12-17 00:37:02.980143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.146 [2024-12-17 00:37:02.980499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.146 [2024-12-17 00:37:02.980529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.146 [2024-12-17 00:37:02.985372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.146 [2024-12-17 00:37:02.985782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.146 [2024-12-17 00:37:02.985816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.146 [2024-12-17 00:37:02.990596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.146 [2024-12-17 
00:37:02.990930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.146 [2024-12-17 00:37:02.990961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.146 [2024-12-17 00:37:02.995506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.146 [2024-12-17 00:37:02.995855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.146 [2024-12-17 00:37:02.995889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.146 [2024-12-17 00:37:03.000399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.146 [2024-12-17 00:37:03.000761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.146 [2024-12-17 00:37:03.000791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.146 [2024-12-17 00:37:03.005205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.146 [2024-12-17 00:37:03.005532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.146 [2024-12-17 00:37:03.005558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.146 [2024-12-17 00:37:03.009884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.146 [2024-12-17 00:37:03.010192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.146 [2024-12-17 00:37:03.010221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.146 [2024-12-17 00:37:03.014564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.146 [2024-12-17 00:37:03.014855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.146 [2024-12-17 00:37:03.014885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.146 [2024-12-17 00:37:03.019379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.146 [2024-12-17 00:37:03.019762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.146 [2024-12-17 00:37:03.019796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.146 [2024-12-17 00:37:03.024473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with 
pdu=0x2000198fef90 00:21:17.146 [2024-12-17 00:37:03.024827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.146 [2024-12-17 00:37:03.024869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.146 [2024-12-17 00:37:03.029484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.146 [2024-12-17 00:37:03.029819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.146 [2024-12-17 00:37:03.029853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.146 [2024-12-17 00:37:03.034733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.147 [2024-12-17 00:37:03.035097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.147 [2024-12-17 00:37:03.035134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.147 [2024-12-17 00:37:03.040083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.147 [2024-12-17 00:37:03.040452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.147 [2024-12-17 00:37:03.040486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.147 [2024-12-17 00:37:03.045495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.147 [2024-12-17 00:37:03.045868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.147 [2024-12-17 00:37:03.045901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.147 [2024-12-17 00:37:03.050601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.147 [2024-12-17 00:37:03.050923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.147 [2024-12-17 00:37:03.050953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.147 [2024-12-17 00:37:03.055620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.147 [2024-12-17 00:37:03.055951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.147 [2024-12-17 00:37:03.055981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.147 [2024-12-17 00:37:03.060701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.147 [2024-12-17 00:37:03.061062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.147 [2024-12-17 00:37:03.061098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.147 [2024-12-17 00:37:03.065447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.147 [2024-12-17 00:37:03.065765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.147 [2024-12-17 00:37:03.065793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.147 [2024-12-17 00:37:03.070169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.147 [2024-12-17 00:37:03.070620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.147 [2024-12-17 00:37:03.070658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.147 [2024-12-17 00:37:03.075338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.147 [2024-12-17 00:37:03.075724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.147 [2024-12-17 00:37:03.075773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.147 [2024-12-17 00:37:03.080394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.147 [2024-12-17 00:37:03.080754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.147 [2024-12-17 00:37:03.080796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.147 [2024-12-17 00:37:03.085291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.147 [2024-12-17 00:37:03.085646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.147 [2024-12-17 00:37:03.085680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.147 [2024-12-17 00:37:03.090238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.147 [2024-12-17 00:37:03.090625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.147 [2024-12-17 00:37:03.090661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.147 [2024-12-17 00:37:03.095025] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.147 [2024-12-17 00:37:03.095345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.147 [2024-12-17 00:37:03.095386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.147 [2024-12-17 00:37:03.099791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.147 [2024-12-17 00:37:03.100129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.147 [2024-12-17 00:37:03.100160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.147 [2024-12-17 00:37:03.104443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.147 [2024-12-17 00:37:03.104833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.147 [2024-12-17 00:37:03.104866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.147 [2024-12-17 00:37:03.109274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.147 [2024-12-17 00:37:03.109607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.147 [2024-12-17 00:37:03.109643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.147 [2024-12-17 00:37:03.113992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.147 [2024-12-17 00:37:03.114322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.147 [2024-12-17 00:37:03.114361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.147 [2024-12-17 00:37:03.118813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.147 [2024-12-17 00:37:03.119122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.147 [2024-12-17 00:37:03.119185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.147 [2024-12-17 00:37:03.123720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.147 [2024-12-17 00:37:03.124046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.147 [2024-12-17 00:37:03.124075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:21:17.147 [2024-12-17 00:37:03.128499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.147 [2024-12-17 00:37:03.128859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.147 [2024-12-17 00:37:03.128894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.147 [2024-12-17 00:37:03.133290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.147 [2024-12-17 00:37:03.133669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.147 [2024-12-17 00:37:03.133701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.147 [2024-12-17 00:37:03.138253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.147 [2024-12-17 00:37:03.138595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.147 [2024-12-17 00:37:03.138633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.147 [2024-12-17 00:37:03.143037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.147 [2024-12-17 00:37:03.143361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.147 [2024-12-17 00:37:03.143404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.147 [2024-12-17 00:37:03.148096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.147 [2024-12-17 00:37:03.148451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.147 [2024-12-17 00:37:03.148487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.408 [2024-12-17 00:37:03.153199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.408 [2024-12-17 00:37:03.153541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.408 [2024-12-17 00:37:03.153573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.408 [2024-12-17 00:37:03.158079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.408 [2024-12-17 00:37:03.158403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.408 [2024-12-17 00:37:03.158433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.408 [2024-12-17 00:37:03.162890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.408 [2024-12-17 00:37:03.163207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.408 [2024-12-17 00:37:03.163237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.408 [2024-12-17 00:37:03.167806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.408 [2024-12-17 00:37:03.168123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.408 [2024-12-17 00:37:03.168152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.408 [2024-12-17 00:37:03.172546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.408 [2024-12-17 00:37:03.172869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.408 [2024-12-17 00:37:03.172905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.408 [2024-12-17 00:37:03.177296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.408 [2024-12-17 00:37:03.177643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.408 [2024-12-17 00:37:03.177677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.408 [2024-12-17 00:37:03.182271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.408 [2024-12-17 00:37:03.182608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.408 [2024-12-17 00:37:03.182642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.408 [2024-12-17 00:37:03.187029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.408 [2024-12-17 00:37:03.187346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.408 [2024-12-17 00:37:03.187380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.408 [2024-12-17 00:37:03.191844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.408 [2024-12-17 00:37:03.192168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.408 [2024-12-17 00:37:03.192197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.408 [2024-12-17 00:37:03.196944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.408 [2024-12-17 00:37:03.197292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.409 [2024-12-17 00:37:03.197339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.409 [2024-12-17 00:37:03.201832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.409 [2024-12-17 00:37:03.202152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.409 [2024-12-17 00:37:03.202180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.409 [2024-12-17 00:37:03.206574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.409 [2024-12-17 00:37:03.206908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.409 [2024-12-17 00:37:03.206936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.409 [2024-12-17 00:37:03.211402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.409 [2024-12-17 00:37:03.211711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.409 [2024-12-17 00:37:03.211741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.409 [2024-12-17 00:37:03.215859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.409 [2024-12-17 00:37:03.216176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.409 [2024-12-17 00:37:03.216219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.409 [2024-12-17 00:37:03.220342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.409 [2024-12-17 00:37:03.220669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.409 [2024-12-17 00:37:03.220697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.409 [2024-12-17 00:37:03.225102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.409 [2024-12-17 00:37:03.225411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.409 [2024-12-17 00:37:03.225451] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.409 [2024-12-17 00:37:03.229726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.409 [2024-12-17 00:37:03.230037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.409 [2024-12-17 00:37:03.230067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.409 [2024-12-17 00:37:03.234386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.409 [2024-12-17 00:37:03.234693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.409 [2024-12-17 00:37:03.234721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.409 [2024-12-17 00:37:03.239045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.409 [2024-12-17 00:37:03.239375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.409 [2024-12-17 00:37:03.239402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.409 [2024-12-17 00:37:03.243731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.409 [2024-12-17 00:37:03.244054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.409 [2024-12-17 00:37:03.244081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.409 [2024-12-17 00:37:03.248390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.409 [2024-12-17 00:37:03.248718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.409 [2024-12-17 00:37:03.248746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.409 [2024-12-17 00:37:03.252956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.409 [2024-12-17 00:37:03.253266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.409 [2024-12-17 00:37:03.253293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.409 [2024-12-17 00:37:03.257587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.409 [2024-12-17 00:37:03.257896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.409 [2024-12-17 
00:37:03.257931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.409 [2024-12-17 00:37:03.262236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.409 [2024-12-17 00:37:03.262584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.409 [2024-12-17 00:37:03.262607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.409 [2024-12-17 00:37:03.266805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.409 [2024-12-17 00:37:03.267138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.409 [2024-12-17 00:37:03.267187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.409 [2024-12-17 00:37:03.271334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.409 [2024-12-17 00:37:03.271654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.409 [2024-12-17 00:37:03.271677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.409 [2024-12-17 00:37:03.275901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.409 [2024-12-17 00:37:03.276219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.409 [2024-12-17 00:37:03.276247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.409 [2024-12-17 00:37:03.280446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.409 [2024-12-17 00:37:03.280791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.409 [2024-12-17 00:37:03.280821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.409 [2024-12-17 00:37:03.285067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.409 [2024-12-17 00:37:03.285369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.409 [2024-12-17 00:37:03.285406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.409 [2024-12-17 00:37:03.289820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.409 [2024-12-17 00:37:03.290147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:17.409 [2024-12-17 00:37:03.290178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.409 [2024-12-17 00:37:03.294476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.409 [2024-12-17 00:37:03.294788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.409 [2024-12-17 00:37:03.294817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.409 [2024-12-17 00:37:03.299207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.409 [2024-12-17 00:37:03.299538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.409 [2024-12-17 00:37:03.299567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.409 [2024-12-17 00:37:03.303853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.409 [2024-12-17 00:37:03.304165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.409 [2024-12-17 00:37:03.304192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.410 [2024-12-17 00:37:03.308653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.410 [2024-12-17 00:37:03.308984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.410 [2024-12-17 00:37:03.309017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.410 [2024-12-17 00:37:03.313244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.410 [2024-12-17 00:37:03.313588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.410 [2024-12-17 00:37:03.313621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.410 [2024-12-17 00:37:03.317862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.410 [2024-12-17 00:37:03.318178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.410 [2024-12-17 00:37:03.318205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.410 [2024-12-17 00:37:03.322514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.410 [2024-12-17 00:37:03.322833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.410 [2024-12-17 00:37:03.322874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.410 [2024-12-17 00:37:03.327129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.410 [2024-12-17 00:37:03.327457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.410 [2024-12-17 00:37:03.327486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.410 [2024-12-17 00:37:03.331683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.410 [2024-12-17 00:37:03.332001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.410 [2024-12-17 00:37:03.332029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.410 [2024-12-17 00:37:03.336155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.410 [2024-12-17 00:37:03.336483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.410 [2024-12-17 00:37:03.336509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.410 [2024-12-17 00:37:03.340953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.410 [2024-12-17 00:37:03.341271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.410 [2024-12-17 00:37:03.341293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.410 [2024-12-17 00:37:03.345511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.410 [2024-12-17 00:37:03.345828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.410 [2024-12-17 00:37:03.345856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.410 [2024-12-17 00:37:03.350052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.410 [2024-12-17 00:37:03.350384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.410 [2024-12-17 00:37:03.350421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.410 [2024-12-17 00:37:03.354622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.410 [2024-12-17 00:37:03.354940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.410 [2024-12-17 00:37:03.354968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.410 [2024-12-17 00:37:03.359226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.410 [2024-12-17 00:37:03.359555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.410 [2024-12-17 00:37:03.359583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.410 [2024-12-17 00:37:03.364235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.410 [2024-12-17 00:37:03.364605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.410 [2024-12-17 00:37:03.364645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.410 [2024-12-17 00:37:03.370351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.410 [2024-12-17 00:37:03.370664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.410 [2024-12-17 00:37:03.370695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.410 [2024-12-17 00:37:03.375889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.410 [2024-12-17 00:37:03.376202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.410 [2024-12-17 00:37:03.376233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.410 [2024-12-17 00:37:03.380531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.410 [2024-12-17 00:37:03.380856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.410 [2024-12-17 00:37:03.380880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.410 [2024-12-17 00:37:03.385154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.410 [2024-12-17 00:37:03.385473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.410 [2024-12-17 00:37:03.385500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.410 [2024-12-17 00:37:03.389913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.410 [2024-12-17 00:37:03.390237] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.410 [2024-12-17 00:37:03.390268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.410 [2024-12-17 00:37:03.394647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.410 [2024-12-17 00:37:03.394958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.410 [2024-12-17 00:37:03.394988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.410 [2024-12-17 00:37:03.399380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.410 [2024-12-17 00:37:03.399692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.410 [2024-12-17 00:37:03.399718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.410 [2024-12-17 00:37:03.403970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.410 [2024-12-17 00:37:03.404282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.410 [2024-12-17 00:37:03.404319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.410 [2024-12-17 00:37:03.408929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.410 [2024-12-17 00:37:03.409304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.410 [2024-12-17 00:37:03.409352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.672 [2024-12-17 00:37:03.413715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.672 [2024-12-17 00:37:03.413797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.672 [2024-12-17 00:37:03.413821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.672 [2024-12-17 00:37:03.418379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.672 [2024-12-17 00:37:03.418497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.672 [2024-12-17 00:37:03.418520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.672 [2024-12-17 00:37:03.423069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.672 
[2024-12-17 00:37:03.423167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.672 [2024-12-17 00:37:03.423187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.672 [2024-12-17 00:37:03.427590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.672 [2024-12-17 00:37:03.427682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.672 [2024-12-17 00:37:03.427703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.672 [2024-12-17 00:37:03.432096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.672 [2024-12-17 00:37:03.432188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.672 [2024-12-17 00:37:03.432209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.672 [2024-12-17 00:37:03.436591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.672 [2024-12-17 00:37:03.436659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.672 [2024-12-17 00:37:03.436680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.672 [2024-12-17 00:37:03.441288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.672 [2024-12-17 00:37:03.441391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.672 [2024-12-17 00:37:03.441426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.672 [2024-12-17 00:37:03.445964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.672 [2024-12-17 00:37:03.446055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.672 [2024-12-17 00:37:03.446076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.672 [2024-12-17 00:37:03.450541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.672 [2024-12-17 00:37:03.450633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.672 [2024-12-17 00:37:03.450653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.672 [2024-12-17 00:37:03.455012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) 
with pdu=0x2000198fef90 00:21:17.672 [2024-12-17 00:37:03.455105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.672 [2024-12-17 00:37:03.455125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.672 [2024-12-17 00:37:03.459547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.672 [2024-12-17 00:37:03.459641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.672 [2024-12-17 00:37:03.459661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.672 [2024-12-17 00:37:03.463963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.672 [2024-12-17 00:37:03.464055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.672 [2024-12-17 00:37:03.464075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.672 [2024-12-17 00:37:03.468512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.672 [2024-12-17 00:37:03.468627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.672 [2024-12-17 00:37:03.468647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.672 [2024-12-17 00:37:03.473036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.672 [2024-12-17 00:37:03.473127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.672 [2024-12-17 00:37:03.473148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.672 [2024-12-17 00:37:03.477528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.672 [2024-12-17 00:37:03.477618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.672 [2024-12-17 00:37:03.477639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.672 [2024-12-17 00:37:03.482055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.672 [2024-12-17 00:37:03.482147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.672 [2024-12-17 00:37:03.482167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.672 [2024-12-17 00:37:03.486628] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.672 [2024-12-17 00:37:03.486721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.672 [2024-12-17 00:37:03.486741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.672 [2024-12-17 00:37:03.491223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.672 [2024-12-17 00:37:03.491314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.672 [2024-12-17 00:37:03.491335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.672 [2024-12-17 00:37:03.495774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.672 [2024-12-17 00:37:03.495864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.672 [2024-12-17 00:37:03.495884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.672 [2024-12-17 00:37:03.500253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.672 [2024-12-17 00:37:03.500359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.672 [2024-12-17 00:37:03.500380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.673 [2024-12-17 00:37:03.504963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.673 [2024-12-17 00:37:03.505042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.673 [2024-12-17 00:37:03.505062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.673 [2024-12-17 00:37:03.509639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.673 [2024-12-17 00:37:03.509732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.673 [2024-12-17 00:37:03.509752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.673 [2024-12-17 00:37:03.514276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.673 [2024-12-17 00:37:03.514378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.673 [2024-12-17 00:37:03.514399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.673 [2024-12-17 00:37:03.518796] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.673 [2024-12-17 00:37:03.518892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.673 [2024-12-17 00:37:03.518912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.673 [2024-12-17 00:37:03.523641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.673 [2024-12-17 00:37:03.523721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.673 [2024-12-17 00:37:03.523741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.673 [2024-12-17 00:37:03.529748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.673 [2024-12-17 00:37:03.529868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.673 [2024-12-17 00:37:03.529906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.673 [2024-12-17 00:37:03.536517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.673 [2024-12-17 00:37:03.536631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.673 [2024-12-17 00:37:03.536656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.673 [2024-12-17 00:37:03.541308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.673 [2024-12-17 00:37:03.541427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.673 [2024-12-17 00:37:03.541448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.673 [2024-12-17 00:37:03.545934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.673 [2024-12-17 00:37:03.546026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.673 [2024-12-17 00:37:03.546047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.673 [2024-12-17 00:37:03.550449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.673 [2024-12-17 00:37:03.550541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.673 [2024-12-17 00:37:03.550561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:21:17.673 [2024-12-17 00:37:03.554913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.673 [2024-12-17 00:37:03.555005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.673 [2024-12-17 00:37:03.555025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.673 [2024-12-17 00:37:03.559419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.673 [2024-12-17 00:37:03.559511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.673 [2024-12-17 00:37:03.559531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.673 [2024-12-17 00:37:03.563904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.673 [2024-12-17 00:37:03.563997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.673 [2024-12-17 00:37:03.564017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.673 [2024-12-17 00:37:03.568424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.673 [2024-12-17 00:37:03.568515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.673 [2024-12-17 00:37:03.568536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.673 [2024-12-17 00:37:03.572982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.673 [2024-12-17 00:37:03.573073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.673 [2024-12-17 00:37:03.573093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.673 [2024-12-17 00:37:03.577538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.673 [2024-12-17 00:37:03.577629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.673 [2024-12-17 00:37:03.577649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.673 [2024-12-17 00:37:03.582015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.673 [2024-12-17 00:37:03.582107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.673 [2024-12-17 00:37:03.582127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.673 [2024-12-17 00:37:03.586770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.673 [2024-12-17 00:37:03.586863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.673 [2024-12-17 00:37:03.586883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.673 [2024-12-17 00:37:03.591372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.673 [2024-12-17 00:37:03.591445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.673 [2024-12-17 00:37:03.591466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.673 [2024-12-17 00:37:03.595801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.673 [2024-12-17 00:37:03.595894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.673 [2024-12-17 00:37:03.595914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.673 [2024-12-17 00:37:03.600296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.673 [2024-12-17 00:37:03.600402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.673 [2024-12-17 00:37:03.600423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.673 [2024-12-17 00:37:03.604813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.673 [2024-12-17 00:37:03.604922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.673 [2024-12-17 00:37:03.604956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.673 [2024-12-17 00:37:03.609340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.673 [2024-12-17 00:37:03.609444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.673 [2024-12-17 00:37:03.609464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.673 [2024-12-17 00:37:03.613936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.674 [2024-12-17 00:37:03.614029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.674 [2024-12-17 00:37:03.614050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.674 [2024-12-17 00:37:03.618536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.674 [2024-12-17 00:37:03.618629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.674 [2024-12-17 00:37:03.618649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.674 [2024-12-17 00:37:03.623109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.674 [2024-12-17 00:37:03.623198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.674 [2024-12-17 00:37:03.623218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.674 [2024-12-17 00:37:03.627676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.674 [2024-12-17 00:37:03.627768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.674 [2024-12-17 00:37:03.627788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.674 [2024-12-17 00:37:03.632142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.674 [2024-12-17 00:37:03.632234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.674 [2024-12-17 00:37:03.632254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.674 6522.00 IOPS, 815.25 MiB/s [2024-12-17T00:37:03.677Z] [2024-12-17 00:37:03.637657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.674 [2024-12-17 00:37:03.637751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.674 [2024-12-17 00:37:03.637772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.674 [2024-12-17 00:37:03.642243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.674 [2024-12-17 00:37:03.642345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.674 [2024-12-17 00:37:03.642379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.674 [2024-12-17 00:37:03.646844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.674 [2024-12-17 00:37:03.646935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.674 
[2024-12-17 00:37:03.646956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.674 [2024-12-17 00:37:03.651421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.674 [2024-12-17 00:37:03.651499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.674 [2024-12-17 00:37:03.651519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.674 [2024-12-17 00:37:03.656003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.674 [2024-12-17 00:37:03.656094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.674 [2024-12-17 00:37:03.656115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.674 [2024-12-17 00:37:03.660480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.674 [2024-12-17 00:37:03.660597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.674 [2024-12-17 00:37:03.660618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.674 [2024-12-17 00:37:03.664941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.674 [2024-12-17 00:37:03.665051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.674 [2024-12-17 00:37:03.665071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.674 [2024-12-17 00:37:03.669458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.674 [2024-12-17 00:37:03.669549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.674 [2024-12-17 00:37:03.669568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.674 [2024-12-17 00:37:03.674241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.674 [2024-12-17 00:37:03.674337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.674 [2024-12-17 00:37:03.674360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.935 [2024-12-17 00:37:03.678874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.935 [2024-12-17 00:37:03.678973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:17.935 [2024-12-17 00:37:03.678997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.935 [2024-12-17 00:37:03.684205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.935 [2024-12-17 00:37:03.684283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.935 [2024-12-17 00:37:03.684306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.935 [2024-12-17 00:37:03.688765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.935 [2024-12-17 00:37:03.688836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.935 [2024-12-17 00:37:03.688859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.935 [2024-12-17 00:37:03.693503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.935 [2024-12-17 00:37:03.693593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.935 [2024-12-17 00:37:03.693615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.935 [2024-12-17 00:37:03.698060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.935 [2024-12-17 00:37:03.698150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.935 [2024-12-17 00:37:03.698171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.935 [2024-12-17 00:37:03.702646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.935 [2024-12-17 00:37:03.702737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.935 [2024-12-17 00:37:03.702758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.935 [2024-12-17 00:37:03.707187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.935 [2024-12-17 00:37:03.707277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.935 [2024-12-17 00:37:03.707298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.935 [2024-12-17 00:37:03.711780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.935 [2024-12-17 00:37:03.711878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.935 [2024-12-17 00:37:03.711899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.935 [2024-12-17 00:37:03.716194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.935 [2024-12-17 00:37:03.716284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.935 [2024-12-17 00:37:03.716305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.935 [2024-12-17 00:37:03.720937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.935 [2024-12-17 00:37:03.721044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.935 [2024-12-17 00:37:03.721064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.935 [2024-12-17 00:37:03.725531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.935 [2024-12-17 00:37:03.725607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.935 [2024-12-17 00:37:03.725627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.935 [2024-12-17 00:37:03.729997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.935 [2024-12-17 00:37:03.730090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.935 [2024-12-17 00:37:03.730110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.935 [2024-12-17 00:37:03.734552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.935 [2024-12-17 00:37:03.734648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.935 [2024-12-17 00:37:03.734668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.935 [2024-12-17 00:37:03.739156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.935 [2024-12-17 00:37:03.739246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.935 [2024-12-17 00:37:03.739267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.935 [2024-12-17 00:37:03.743766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.935 [2024-12-17 00:37:03.743855] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.935 [2024-12-17 00:37:03.743876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.935 [2024-12-17 00:37:03.748281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.935 [2024-12-17 00:37:03.748391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.936 [2024-12-17 00:37:03.748412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.936 [2024-12-17 00:37:03.752837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.936 [2024-12-17 00:37:03.752917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.936 [2024-12-17 00:37:03.752937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.936 [2024-12-17 00:37:03.757341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.936 [2024-12-17 00:37:03.757449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.936 [2024-12-17 00:37:03.757470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.936 [2024-12-17 00:37:03.761861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.936 [2024-12-17 00:37:03.761951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.936 [2024-12-17 00:37:03.761973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.936 [2024-12-17 00:37:03.766444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.936 [2024-12-17 00:37:03.766533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.936 [2024-12-17 00:37:03.766553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.936 [2024-12-17 00:37:03.770899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.936 [2024-12-17 00:37:03.770988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.936 [2024-12-17 00:37:03.771009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.936 [2024-12-17 00:37:03.775440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.936 [2024-12-17 00:37:03.775528] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.936 [2024-12-17 00:37:03.775549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.936 [2024-12-17 00:37:03.779855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.936 [2024-12-17 00:37:03.779945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.936 [2024-12-17 00:37:03.779965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.936 [2024-12-17 00:37:03.784428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.936 [2024-12-17 00:37:03.784520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.936 [2024-12-17 00:37:03.784541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.936 [2024-12-17 00:37:03.788856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.936 [2024-12-17 00:37:03.788979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.936 [2024-12-17 00:37:03.789015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.936 [2024-12-17 00:37:03.793528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.936 [2024-12-17 00:37:03.793618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.936 [2024-12-17 00:37:03.793639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.936 [2024-12-17 00:37:03.797998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.936 [2024-12-17 00:37:03.798089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.936 [2024-12-17 00:37:03.798109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.936 [2024-12-17 00:37:03.802480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.936 [2024-12-17 00:37:03.802585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.936 [2024-12-17 00:37:03.802605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.936 [2024-12-17 00:37:03.806946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.936 
[2024-12-17 00:37:03.807035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.936 [2024-12-17 00:37:03.807056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.936 [2024-12-17 00:37:03.811519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.936 [2024-12-17 00:37:03.811609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.936 [2024-12-17 00:37:03.811629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.936 [2024-12-17 00:37:03.815981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.936 [2024-12-17 00:37:03.816070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.936 [2024-12-17 00:37:03.816090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.936 [2024-12-17 00:37:03.820543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.936 [2024-12-17 00:37:03.820681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.936 [2024-12-17 00:37:03.820702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.936 [2024-12-17 00:37:03.825048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.936 [2024-12-17 00:37:03.825137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.936 [2024-12-17 00:37:03.825158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.936 [2024-12-17 00:37:03.829578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.936 [2024-12-17 00:37:03.829666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.936 [2024-12-17 00:37:03.829686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.936 [2024-12-17 00:37:03.834064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.936 [2024-12-17 00:37:03.834156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.936 [2024-12-17 00:37:03.834178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.936 [2024-12-17 00:37:03.838646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) 
with pdu=0x2000198fef90 00:21:17.936 [2024-12-17 00:37:03.838755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.936 [2024-12-17 00:37:03.838775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.936 [2024-12-17 00:37:03.843239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.936 [2024-12-17 00:37:03.843350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.936 [2024-12-17 00:37:03.843371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.936 [2024-12-17 00:37:03.847653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.936 [2024-12-17 00:37:03.847746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.936 [2024-12-17 00:37:03.847766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.936 [2024-12-17 00:37:03.852157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.936 [2024-12-17 00:37:03.852250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.936 [2024-12-17 00:37:03.852269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.936 [2024-12-17 00:37:03.856704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.937 [2024-12-17 00:37:03.856784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.937 [2024-12-17 00:37:03.856804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.937 [2024-12-17 00:37:03.861257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.937 [2024-12-17 00:37:03.861359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.937 [2024-12-17 00:37:03.861380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.937 [2024-12-17 00:37:03.865705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.937 [2024-12-17 00:37:03.865794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.937 [2024-12-17 00:37:03.865814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.937 [2024-12-17 00:37:03.870238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.937 [2024-12-17 00:37:03.870328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.937 [2024-12-17 00:37:03.870361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.937 [2024-12-17 00:37:03.874855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.937 [2024-12-17 00:37:03.874945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.937 [2024-12-17 00:37:03.874965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.937 [2024-12-17 00:37:03.879297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.937 [2024-12-17 00:37:03.879398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.937 [2024-12-17 00:37:03.879419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.937 [2024-12-17 00:37:03.883900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.937 [2024-12-17 00:37:03.883989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.937 [2024-12-17 00:37:03.884009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.937 [2024-12-17 00:37:03.888420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.937 [2024-12-17 00:37:03.888512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.937 [2024-12-17 00:37:03.888532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.937 [2024-12-17 00:37:03.892972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.937 [2024-12-17 00:37:03.893062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.937 [2024-12-17 00:37:03.893083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.937 [2024-12-17 00:37:03.897517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.937 [2024-12-17 00:37:03.897607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.937 [2024-12-17 00:37:03.897627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.937 [2024-12-17 00:37:03.901993] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.937 [2024-12-17 00:37:03.902085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.937 [2024-12-17 00:37:03.902105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.937 [2024-12-17 00:37:03.906573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.937 [2024-12-17 00:37:03.906664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.937 [2024-12-17 00:37:03.906684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.937 [2024-12-17 00:37:03.911034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.937 [2024-12-17 00:37:03.911123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.937 [2024-12-17 00:37:03.911143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.937 [2024-12-17 00:37:03.915638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.937 [2024-12-17 00:37:03.915729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.937 [2024-12-17 00:37:03.915749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.937 [2024-12-17 00:37:03.920108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.937 [2024-12-17 00:37:03.920198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.937 [2024-12-17 00:37:03.920218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.937 [2024-12-17 00:37:03.924740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.937 [2024-12-17 00:37:03.924821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.937 [2024-12-17 00:37:03.924842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.937 [2024-12-17 00:37:03.929277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.937 [2024-12-17 00:37:03.929380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.937 [2024-12-17 00:37:03.929401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:21:17.937 [2024-12-17 00:37:03.933848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:17.937 [2024-12-17 00:37:03.933939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.937 [2024-12-17 00:37:03.933961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.198 [2024-12-17 00:37:03.938694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.198 [2024-12-17 00:37:03.938775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.199 [2024-12-17 00:37:03.938799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.199 [2024-12-17 00:37:03.943346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.199 [2024-12-17 00:37:03.943436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.199 [2024-12-17 00:37:03.943458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.199 [2024-12-17 00:37:03.948043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.199 [2024-12-17 00:37:03.948121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.199 [2024-12-17 00:37:03.948143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.199 [2024-12-17 00:37:03.952611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.199 [2024-12-17 00:37:03.952696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.199 [2024-12-17 00:37:03.952718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.199 [2024-12-17 00:37:03.957160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.199 [2024-12-17 00:37:03.957250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.199 [2024-12-17 00:37:03.957270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.199 [2024-12-17 00:37:03.961700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.199 [2024-12-17 00:37:03.961789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.199 [2024-12-17 00:37:03.961810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.199 [2024-12-17 00:37:03.966230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.199 [2024-12-17 00:37:03.966320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.199 [2024-12-17 00:37:03.966354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.199 [2024-12-17 00:37:03.971874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.199 [2024-12-17 00:37:03.971986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.199 [2024-12-17 00:37:03.972006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.199 [2024-12-17 00:37:03.977810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.199 [2024-12-17 00:37:03.977904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.199 [2024-12-17 00:37:03.977925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.199 [2024-12-17 00:37:03.982698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.199 [2024-12-17 00:37:03.982805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.199 [2024-12-17 00:37:03.982826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.199 [2024-12-17 00:37:03.987862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.199 [2024-12-17 00:37:03.987953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.199 [2024-12-17 00:37:03.987974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.199 [2024-12-17 00:37:03.993199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.199 [2024-12-17 00:37:03.993297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.199 [2024-12-17 00:37:03.993350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.199 [2024-12-17 00:37:03.998818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.199 [2024-12-17 00:37:03.998911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.199 [2024-12-17 00:37:03.998932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.199 [2024-12-17 00:37:04.003697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.199 [2024-12-17 00:37:04.003805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.199 [2024-12-17 00:37:04.003826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.199 [2024-12-17 00:37:04.008509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.199 [2024-12-17 00:37:04.008617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.199 [2024-12-17 00:37:04.008639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.199 [2024-12-17 00:37:04.013304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.199 [2024-12-17 00:37:04.013438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.199 [2024-12-17 00:37:04.013459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.199 [2024-12-17 00:37:04.018167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.199 [2024-12-17 00:37:04.018257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.199 [2024-12-17 00:37:04.018278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.199 [2024-12-17 00:37:04.023047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.199 [2024-12-17 00:37:04.023139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.199 [2024-12-17 00:37:04.023159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.199 [2024-12-17 00:37:04.027645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.199 [2024-12-17 00:37:04.027734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.199 [2024-12-17 00:37:04.027754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.199 [2024-12-17 00:37:04.032141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.199 [2024-12-17 00:37:04.032231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.199 [2024-12-17 00:37:04.032252] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.199 [2024-12-17 00:37:04.036658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.199 [2024-12-17 00:37:04.036741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.199 [2024-12-17 00:37:04.036762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.199 [2024-12-17 00:37:04.041294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.199 [2024-12-17 00:37:04.041398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.199 [2024-12-17 00:37:04.041419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.199 [2024-12-17 00:37:04.045871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.199 [2024-12-17 00:37:04.045961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.199 [2024-12-17 00:37:04.045981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.199 [2024-12-17 00:37:04.050392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.199 [2024-12-17 00:37:04.050482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.199 [2024-12-17 00:37:04.050502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.199 [2024-12-17 00:37:04.054910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.200 [2024-12-17 00:37:04.055005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.200 [2024-12-17 00:37:04.055026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.200 [2024-12-17 00:37:04.059425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.200 [2024-12-17 00:37:04.059527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.200 [2024-12-17 00:37:04.059547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.200 [2024-12-17 00:37:04.063958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.200 [2024-12-17 00:37:04.064056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.200 
[2024-12-17 00:37:04.064076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.200 [2024-12-17 00:37:04.068606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.200 [2024-12-17 00:37:04.068684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.200 [2024-12-17 00:37:04.068705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.200 [2024-12-17 00:37:04.073151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.200 [2024-12-17 00:37:04.073240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.200 [2024-12-17 00:37:04.073261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.200 [2024-12-17 00:37:04.077723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.200 [2024-12-17 00:37:04.077814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.200 [2024-12-17 00:37:04.077834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.200 [2024-12-17 00:37:04.082206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.200 [2024-12-17 00:37:04.082305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.200 [2024-12-17 00:37:04.082327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.200 [2024-12-17 00:37:04.086769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.200 [2024-12-17 00:37:04.086859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.200 [2024-12-17 00:37:04.086879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.200 [2024-12-17 00:37:04.091325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.200 [2024-12-17 00:37:04.091428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.200 [2024-12-17 00:37:04.091448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.200 [2024-12-17 00:37:04.095844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.200 [2024-12-17 00:37:04.095933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:18.200 [2024-12-17 00:37:04.095955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.200 [2024-12-17 00:37:04.100421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.200 [2024-12-17 00:37:04.100511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.200 [2024-12-17 00:37:04.100532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.200 [2024-12-17 00:37:04.104870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.200 [2024-12-17 00:37:04.105001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.200 [2024-12-17 00:37:04.105022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.200 [2024-12-17 00:37:04.109487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.200 [2024-12-17 00:37:04.109576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.200 [2024-12-17 00:37:04.109596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.200 [2024-12-17 00:37:04.113964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.200 [2024-12-17 00:37:04.114053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.200 [2024-12-17 00:37:04.114073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.200 [2024-12-17 00:37:04.118564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.200 [2024-12-17 00:37:04.118654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.200 [2024-12-17 00:37:04.118675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.200 [2024-12-17 00:37:04.122996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.200 [2024-12-17 00:37:04.123086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.200 [2024-12-17 00:37:04.123106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.200 [2024-12-17 00:37:04.127572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.200 [2024-12-17 00:37:04.127662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.200 [2024-12-17 00:37:04.127682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.200 [2024-12-17 00:37:04.132010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.200 [2024-12-17 00:37:04.132099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.200 [2024-12-17 00:37:04.132119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.200 [2024-12-17 00:37:04.136544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.200 [2024-12-17 00:37:04.136645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.200 [2024-12-17 00:37:04.136666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.200 [2024-12-17 00:37:04.141022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.200 [2024-12-17 00:37:04.141111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.200 [2024-12-17 00:37:04.141131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.200 [2024-12-17 00:37:04.145607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.200 [2024-12-17 00:37:04.145699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.200 [2024-12-17 00:37:04.145719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.200 [2024-12-17 00:37:04.150052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.200 [2024-12-17 00:37:04.150143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.200 [2024-12-17 00:37:04.150163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.200 [2024-12-17 00:37:04.154710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.200 [2024-12-17 00:37:04.154800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.200 [2024-12-17 00:37:04.154820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.200 [2024-12-17 00:37:04.159233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.200 [2024-12-17 00:37:04.159323] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.200 [2024-12-17 00:37:04.159356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.200 [2024-12-17 00:37:04.163730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.200 [2024-12-17 00:37:04.163819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.201 [2024-12-17 00:37:04.163839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.201 [2024-12-17 00:37:04.168266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.201 [2024-12-17 00:37:04.168379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.201 [2024-12-17 00:37:04.168400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.201 [2024-12-17 00:37:04.172726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.201 [2024-12-17 00:37:04.172800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.201 [2024-12-17 00:37:04.172821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.201 [2024-12-17 00:37:04.177311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.201 [2024-12-17 00:37:04.177426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.201 [2024-12-17 00:37:04.177446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.201 [2024-12-17 00:37:04.181828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.201 [2024-12-17 00:37:04.181917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.201 [2024-12-17 00:37:04.181937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.201 [2024-12-17 00:37:04.186354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.201 [2024-12-17 00:37:04.186445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.201 [2024-12-17 00:37:04.186465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.201 [2024-12-17 00:37:04.190819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.201 
[2024-12-17 00:37:04.190910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.201 [2024-12-17 00:37:04.190929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.201 [2024-12-17 00:37:04.195418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.201 [2024-12-17 00:37:04.195519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.201 [2024-12-17 00:37:04.195539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.201 [2024-12-17 00:37:04.200581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.201 [2024-12-17 00:37:04.200668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.201 [2024-12-17 00:37:04.200693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.462 [2024-12-17 00:37:04.205651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.462 [2024-12-17 00:37:04.205756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.462 [2024-12-17 00:37:04.205779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.462 [2024-12-17 00:37:04.210759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.462 [2024-12-17 00:37:04.210841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.462 [2024-12-17 00:37:04.210865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.462 [2024-12-17 00:37:04.216065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.462 [2024-12-17 00:37:04.216133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.462 [2024-12-17 00:37:04.216156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.462 [2024-12-17 00:37:04.221365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.462 [2024-12-17 00:37:04.221482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.462 [2024-12-17 00:37:04.221507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.462 [2024-12-17 00:37:04.226928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) 
with pdu=0x2000198fef90 00:21:18.462 [2024-12-17 00:37:04.227028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.462 [2024-12-17 00:37:04.227050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.462 [2024-12-17 00:37:04.231948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.462 [2024-12-17 00:37:04.232044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.462 [2024-12-17 00:37:04.232066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.462 [2024-12-17 00:37:04.236930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.462 [2024-12-17 00:37:04.237024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.462 [2024-12-17 00:37:04.237046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.462 [2024-12-17 00:37:04.241784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.462 [2024-12-17 00:37:04.241885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.462 [2024-12-17 00:37:04.241907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.462 [2024-12-17 00:37:04.246820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.462 [2024-12-17 00:37:04.246926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.462 [2024-12-17 00:37:04.246947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.463 [2024-12-17 00:37:04.251636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.463 [2024-12-17 00:37:04.251733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.463 [2024-12-17 00:37:04.251754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.463 [2024-12-17 00:37:04.256343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.463 [2024-12-17 00:37:04.256437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.463 [2024-12-17 00:37:04.256458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.463 [2024-12-17 00:37:04.260984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.463 [2024-12-17 00:37:04.261077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.463 [2024-12-17 00:37:04.261098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.463 [2024-12-17 00:37:04.265814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.463 [2024-12-17 00:37:04.265907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.463 [2024-12-17 00:37:04.265928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.463 [2024-12-17 00:37:04.270494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.463 [2024-12-17 00:37:04.270586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.463 [2024-12-17 00:37:04.270607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.463 [2024-12-17 00:37:04.275123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.463 [2024-12-17 00:37:04.275217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.463 [2024-12-17 00:37:04.275238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.463 [2024-12-17 00:37:04.279759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.463 [2024-12-17 00:37:04.279852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.463 [2024-12-17 00:37:04.279873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.463 [2024-12-17 00:37:04.284544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.463 [2024-12-17 00:37:04.284652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.463 [2024-12-17 00:37:04.284674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.463 [2024-12-17 00:37:04.289255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.463 [2024-12-17 00:37:04.289363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.463 [2024-12-17 00:37:04.289383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.463 [2024-12-17 00:37:04.293972] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.463 [2024-12-17 00:37:04.294066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.463 [2024-12-17 00:37:04.294086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.463 [2024-12-17 00:37:04.298666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.463 [2024-12-17 00:37:04.298755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.463 [2024-12-17 00:37:04.298776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.463 [2024-12-17 00:37:04.303529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.463 [2024-12-17 00:37:04.303607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.463 [2024-12-17 00:37:04.303627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.463 [2024-12-17 00:37:04.308300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.463 [2024-12-17 00:37:04.308403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.463 [2024-12-17 00:37:04.308424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.463 [2024-12-17 00:37:04.312881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.463 [2024-12-17 00:37:04.313009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.463 [2024-12-17 00:37:04.313030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.463 [2024-12-17 00:37:04.317896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.463 [2024-12-17 00:37:04.317976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.463 [2024-12-17 00:37:04.317998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.463 [2024-12-17 00:37:04.322642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.463 [2024-12-17 00:37:04.322735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.463 [2024-12-17 00:37:04.322755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:21:18.463 [2024-12-17 00:37:04.327228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.463 [2024-12-17 00:37:04.327321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.463 [2024-12-17 00:37:04.327354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.463 [2024-12-17 00:37:04.331799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.463 [2024-12-17 00:37:04.331902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.463 [2024-12-17 00:37:04.331923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.463 [2024-12-17 00:37:04.336708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.463 [2024-12-17 00:37:04.336778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.463 [2024-12-17 00:37:04.336799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.463 [2024-12-17 00:37:04.341373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.463 [2024-12-17 00:37:04.341476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.463 [2024-12-17 00:37:04.341496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.463 [2024-12-17 00:37:04.346026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.463 [2024-12-17 00:37:04.346118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.463 [2024-12-17 00:37:04.346139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.463 [2024-12-17 00:37:04.350814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.463 [2024-12-17 00:37:04.350899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.463 [2024-12-17 00:37:04.350921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.463 [2024-12-17 00:37:04.355520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.463 [2024-12-17 00:37:04.355614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.463 [2024-12-17 00:37:04.355635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.463 [2024-12-17 00:37:04.360115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.464 [2024-12-17 00:37:04.360207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.464 [2024-12-17 00:37:04.360228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.464 [2024-12-17 00:37:04.364924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.464 [2024-12-17 00:37:04.365045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.464 [2024-12-17 00:37:04.365065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.464 [2024-12-17 00:37:04.369968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.464 [2024-12-17 00:37:04.370061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.464 [2024-12-17 00:37:04.370082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.464 [2024-12-17 00:37:04.374600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.464 [2024-12-17 00:37:04.374678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.464 [2024-12-17 00:37:04.374699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.464 [2024-12-17 00:37:04.379188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.464 [2024-12-17 00:37:04.379280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.464 [2024-12-17 00:37:04.379301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.464 [2024-12-17 00:37:04.383998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.464 [2024-12-17 00:37:04.384101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.464 [2024-12-17 00:37:04.384122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.464 [2024-12-17 00:37:04.388715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.464 [2024-12-17 00:37:04.388796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.464 [2024-12-17 00:37:04.388818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.464 [2024-12-17 00:37:04.393460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.464 [2024-12-17 00:37:04.393555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.464 [2024-12-17 00:37:04.393576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.464 [2024-12-17 00:37:04.398393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.464 [2024-12-17 00:37:04.398487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.464 [2024-12-17 00:37:04.398508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.464 [2024-12-17 00:37:04.403078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.464 [2024-12-17 00:37:04.403185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.464 [2024-12-17 00:37:04.403205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.464 [2024-12-17 00:37:04.407830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.464 [2024-12-17 00:37:04.407920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.464 [2024-12-17 00:37:04.407940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.464 [2024-12-17 00:37:04.412372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.464 [2024-12-17 00:37:04.412461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.464 [2024-12-17 00:37:04.412482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.464 [2024-12-17 00:37:04.416867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.464 [2024-12-17 00:37:04.416987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.464 [2024-12-17 00:37:04.417007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.464 [2024-12-17 00:37:04.421468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.464 [2024-12-17 00:37:04.421557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.464 [2024-12-17 00:37:04.421577] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.464 [2024-12-17 00:37:04.425936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.464 [2024-12-17 00:37:04.426025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.464 [2024-12-17 00:37:04.426045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.464 [2024-12-17 00:37:04.430537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.464 [2024-12-17 00:37:04.430612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.464 [2024-12-17 00:37:04.430633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.464 [2024-12-17 00:37:04.435120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.464 [2024-12-17 00:37:04.435210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.464 [2024-12-17 00:37:04.435230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.464 [2024-12-17 00:37:04.439691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.464 [2024-12-17 00:37:04.439781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.464 [2024-12-17 00:37:04.439801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.464 [2024-12-17 00:37:04.444324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.464 [2024-12-17 00:37:04.444428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.464 [2024-12-17 00:37:04.444448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.464 [2024-12-17 00:37:04.448771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.464 [2024-12-17 00:37:04.448850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.464 [2024-12-17 00:37:04.448871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.464 [2024-12-17 00:37:04.453328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.464 [2024-12-17 00:37:04.453442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.464 [2024-12-17 
00:37:04.453462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.464 [2024-12-17 00:37:04.457842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.464 [2024-12-17 00:37:04.457932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.464 [2024-12-17 00:37:04.457952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.464 [2024-12-17 00:37:04.462518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.464 [2024-12-17 00:37:04.462604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.464 [2024-12-17 00:37:04.462627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.725 [2024-12-17 00:37:04.467349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.725 [2024-12-17 00:37:04.467439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.725 [2024-12-17 00:37:04.467461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.725 [2024-12-17 00:37:04.472002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.725 [2024-12-17 00:37:04.472114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.725 [2024-12-17 00:37:04.472137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.725 [2024-12-17 00:37:04.476767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.725 [2024-12-17 00:37:04.476834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.725 [2024-12-17 00:37:04.476858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.725 [2024-12-17 00:37:04.481355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.725 [2024-12-17 00:37:04.481447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.725 [2024-12-17 00:37:04.481468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.725 [2024-12-17 00:37:04.485842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.725 [2024-12-17 00:37:04.485927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:18.725 [2024-12-17 00:37:04.485947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.725 [2024-12-17 00:37:04.490401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.725 [2024-12-17 00:37:04.490488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.725 [2024-12-17 00:37:04.490508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.725 [2024-12-17 00:37:04.494857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.725 [2024-12-17 00:37:04.495076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.725 [2024-12-17 00:37:04.495098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.725 [2024-12-17 00:37:04.501148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.725 [2024-12-17 00:37:04.501245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.725 [2024-12-17 00:37:04.501265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.725 [2024-12-17 00:37:04.506532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.725 [2024-12-17 00:37:04.506611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.725 [2024-12-17 00:37:04.506631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.725 [2024-12-17 00:37:04.511189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.725 [2024-12-17 00:37:04.511272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.725 [2024-12-17 00:37:04.511292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.725 [2024-12-17 00:37:04.515834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.725 [2024-12-17 00:37:04.515920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.725 [2024-12-17 00:37:04.515941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.725 [2024-12-17 00:37:04.520393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.725 [2024-12-17 00:37:04.520470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.725 [2024-12-17 00:37:04.520505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.725 [2024-12-17 00:37:04.524948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.725 [2024-12-17 00:37:04.525033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.725 [2024-12-17 00:37:04.525052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.725 [2024-12-17 00:37:04.529503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.725 [2024-12-17 00:37:04.529588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.725 [2024-12-17 00:37:04.529608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.725 [2024-12-17 00:37:04.533975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.725 [2024-12-17 00:37:04.534053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.725 [2024-12-17 00:37:04.534073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.725 [2024-12-17 00:37:04.538520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.725 [2024-12-17 00:37:04.538608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.725 [2024-12-17 00:37:04.538628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.725 [2024-12-17 00:37:04.542980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.725 [2024-12-17 00:37:04.543060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.725 [2024-12-17 00:37:04.543095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.725 [2024-12-17 00:37:04.547488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.725 [2024-12-17 00:37:04.547572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.726 [2024-12-17 00:37:04.547592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.726 [2024-12-17 00:37:04.551851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.726 [2024-12-17 00:37:04.551936] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.726 [2024-12-17 00:37:04.551956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.726 [2024-12-17 00:37:04.556375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.726 [2024-12-17 00:37:04.556456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.726 [2024-12-17 00:37:04.556476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.726 [2024-12-17 00:37:04.560813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.726 [2024-12-17 00:37:04.560888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.726 [2024-12-17 00:37:04.560908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.726 [2024-12-17 00:37:04.565354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.726 [2024-12-17 00:37:04.565616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.726 [2024-12-17 00:37:04.565637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.726 [2024-12-17 00:37:04.570102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.726 [2024-12-17 00:37:04.570180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.726 [2024-12-17 00:37:04.570201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.726 [2024-12-17 00:37:04.574625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.726 [2024-12-17 00:37:04.574710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.726 [2024-12-17 00:37:04.574729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.726 [2024-12-17 00:37:04.579079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.726 [2024-12-17 00:37:04.579155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.726 [2024-12-17 00:37:04.579175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.726 [2024-12-17 00:37:04.583685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.726 [2024-12-17 00:37:04.583768] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.726 [2024-12-17 00:37:04.583788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.726 [2024-12-17 00:37:04.588066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.726 [2024-12-17 00:37:04.588145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.726 [2024-12-17 00:37:04.588165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.726 [2024-12-17 00:37:04.592627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.726 [2024-12-17 00:37:04.592692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.726 [2024-12-17 00:37:04.592713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.726 [2024-12-17 00:37:04.597155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.726 [2024-12-17 00:37:04.597234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.726 [2024-12-17 00:37:04.597254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.726 [2024-12-17 00:37:04.601651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.726 [2024-12-17 00:37:04.601731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.726 [2024-12-17 00:37:04.601750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.726 [2024-12-17 00:37:04.606091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.726 [2024-12-17 00:37:04.606172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.726 [2024-12-17 00:37:04.606191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.726 [2024-12-17 00:37:04.610580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.726 [2024-12-17 00:37:04.610662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.726 [2024-12-17 00:37:04.610682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.726 [2024-12-17 00:37:04.615054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 
00:21:18.726 [2024-12-17 00:37:04.615124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.726 [2024-12-17 00:37:04.615145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.726 [2024-12-17 00:37:04.619569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.726 [2024-12-17 00:37:04.619648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.726 [2024-12-17 00:37:04.619667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:18.726 [2024-12-17 00:37:04.623988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.726 [2024-12-17 00:37:04.624069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.726 [2024-12-17 00:37:04.624088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:18.726 [2024-12-17 00:37:04.628441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.726 [2024-12-17 00:37:04.628520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.726 [2024-12-17 00:37:04.628539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:18.726 [2024-12-17 00:37:04.632860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa36550) with pdu=0x2000198fef90 00:21:18.726 [2024-12-17 00:37:04.632974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.726 [2024-12-17 00:37:04.632993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:18.726 6590.50 IOPS, 823.81 MiB/s 00:21:18.726 Latency(us) 00:21:18.726 [2024-12-17T00:37:04.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.726 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:18.726 nvme0n1 : 2.00 6589.15 823.64 0.00 0.00 2422.89 1735.21 6315.29 00:21:18.726 [2024-12-17T00:37:04.729Z] =================================================================================================================== 00:21:18.726 [2024-12-17T00:37:04.729Z] Total : 6589.15 823.64 0.00 0.00 2422.89 1735.21 6315.29 00:21:18.726 { 00:21:18.726 "results": [ 00:21:18.726 { 00:21:18.726 "job": "nvme0n1", 00:21:18.726 "core_mask": "0x2", 00:21:18.726 "workload": "randwrite", 00:21:18.726 "status": "finished", 00:21:18.726 "queue_depth": 16, 00:21:18.726 "io_size": 131072, 00:21:18.726 "runtime": 2.003596, 00:21:18.726 "iops": 6589.152703439217, 00:21:18.726 "mibps": 823.6440879299021, 00:21:18.726 "io_failed": 0, 00:21:18.726 "io_timeout": 0, 00:21:18.726 "avg_latency_us": 2422.8879921774937, 00:21:18.726 "min_latency_us": 1735.2145454545455, 
00:21:18.727 "max_latency_us": 6315.2872727272725 00:21:18.727 } 00:21:18.727 ], 00:21:18.727 "core_count": 1 00:21:18.727 } 00:21:18.727 00:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:18.727 00:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:18.727 00:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:18.727 00:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:18.727 | .driver_specific 00:21:18.727 | .nvme_error 00:21:18.727 | .status_code 00:21:18.727 | .command_transient_transport_error' 00:21:18.986 00:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 425 > 0 )) 00:21:18.986 00:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94674 00:21:18.986 00:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 94674 ']' 00:21:18.986 00:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 94674 00:21:18.986 00:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:21:18.986 00:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:18.986 00:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94674 00:21:19.256 killing process with pid 94674 00:21:19.256 Received shutdown signal, test time was about 2.000000 seconds 00:21:19.256 00:21:19.256 Latency(us) 00:21:19.256 [2024-12-17T00:37:05.259Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:19.256 [2024-12-17T00:37:05.259Z] =================================================================================================================== 00:21:19.256 [2024-12-17T00:37:05.259Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:19.256 00:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:19.256 00:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:19.256 00:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94674' 00:21:19.256 00:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 94674 00:21:19.256 00:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 94674 00:21:19.256 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 94486 00:21:19.256 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 94486 ']' 00:21:19.256 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 94486 00:21:19.256 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:21:19.256 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:19.256 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94486 00:21:19.256 killing process with pid 94486 00:21:19.256 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:19.256 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:19.256 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94486' 00:21:19.256 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 94486 00:21:19.257 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 94486 00:21:19.522 00:21:19.522 real 0m15.750s 00:21:19.522 user 0m30.850s 00:21:19.522 sys 0m4.373s 00:21:19.522 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:19.522 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:19.522 ************************************ 00:21:19.522 END TEST nvmf_digest_error 00:21:19.522 ************************************ 00:21:19.522 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:21:19.523 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:21:19.523 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@512 -- # nvmfcleanup 00:21:19.523 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:21:19.523 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:19.523 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:21:19.523 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:19.523 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:19.523 rmmod nvme_tcp 00:21:19.523 rmmod nvme_fabrics 00:21:19.523 rmmod nvme_keyring 00:21:19.523 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:19.523 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:21:19.523 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:21:19.523 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@513 -- # '[' -n 94486 ']' 00:21:19.523 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # killprocess 94486 00:21:19.523 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 94486 ']' 00:21:19.523 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 94486 00:21:19.523 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (94486) - No such process 00:21:19.523 Process with pid 94486 is not found 00:21:19.523 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 94486 is not found' 00:21:19.523 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:21:19.523 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:21:19.523 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:21:19.523 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:21:19.523 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # 
iptables-save 00:21:19.523 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:21:19.523 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-restore 00:21:19.523 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:19.523 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:19.523 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:19.523 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:19.523 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:19.800 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:19.800 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:19.800 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:19.800 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:19.800 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:19.800 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:19.800 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:19.800 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:19.800 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:19.800 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:19.800 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:19.800 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.800 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:19.800 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.800 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:21:19.800 00:21:19.800 real 0m31.517s 00:21:19.800 user 0m59.703s 00:21:19.800 sys 0m9.045s 00:21:19.800 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:19.800 ************************************ 00:21:19.800 00:37:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:19.800 END TEST nvmf_digest 00:21:19.800 ************************************ 00:21:19.800 00:37:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:21:19.800 00:37:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:21:19.800 00:37:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:19.800 00:37:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:19.800 00:37:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:19.800 00:37:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.800 
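For reference, the digest_error pass/fail decision traced above comes down to reading a single counter out of bdev_get_iostat. A minimal standalone sketch of that check, assuming the bdevperf instance from this run is still serving RPCs on /var/tmp/bperf.sock and exposing the bdev as nvme0n1 (both taken from the trace above):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Pull the per-bdev NVMe error statistics and keep only the transient
# transport error counter that the injected bad data digests are expected to raise.
errcount=$("$rpc_py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
# The test only asserts that the counter is non-zero (it was 425 in this run),
# since the exact value depends on how many WRITEs hit a corrupted digest.
(( errcount > 0 )) && echo "transient transport errors observed: $errcount"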
************************************ 00:21:19.800 START TEST nvmf_host_multipath 00:21:19.800 ************************************ 00:21:19.800 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:20.137 * Looking for test storage... 00:21:20.137 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:20.137 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:20.137 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:21:20.137 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:20.137 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:20.137 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:20.137 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:20.137 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:20.137 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:21:20.137 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:21:20.137 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:21:20.137 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:21:20.137 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:21:20.137 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:21:20.137 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:21:20.137 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:20.137 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:21:20.137 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:21:20.137 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:20.137 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:20.137 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:21:20.137 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:21:20.137 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:20.137 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:21:20.137 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:21:20.137 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:21:20.137 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:21:20.137 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:20.137 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:21:20.137 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:21:20.137 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:20.137 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:20.137 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:21:20.137 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:20.137 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:20.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.137 --rc genhtml_branch_coverage=1 00:21:20.137 --rc genhtml_function_coverage=1 00:21:20.137 --rc genhtml_legend=1 00:21:20.137 --rc geninfo_all_blocks=1 00:21:20.137 --rc geninfo_unexecuted_blocks=1 00:21:20.137 00:21:20.137 ' 00:21:20.137 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:20.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.137 --rc genhtml_branch_coverage=1 00:21:20.137 --rc genhtml_function_coverage=1 00:21:20.137 --rc genhtml_legend=1 00:21:20.137 --rc geninfo_all_blocks=1 00:21:20.137 --rc geninfo_unexecuted_blocks=1 00:21:20.137 00:21:20.137 ' 00:21:20.137 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:20.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.137 --rc genhtml_branch_coverage=1 00:21:20.137 --rc genhtml_function_coverage=1 00:21:20.137 --rc genhtml_legend=1 00:21:20.137 --rc geninfo_all_blocks=1 00:21:20.137 --rc geninfo_unexecuted_blocks=1 00:21:20.137 00:21:20.137 ' 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:20.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.138 --rc genhtml_branch_coverage=1 00:21:20.138 --rc genhtml_function_coverage=1 00:21:20.138 --rc genhtml_legend=1 00:21:20.138 --rc geninfo_all_blocks=1 00:21:20.138 --rc geninfo_unexecuted_blocks=1 00:21:20.138 00:21:20.138 ' 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:20.138 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@456 -- # nvmf_veth_init 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:20.138 00:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:20.138 Cannot find device "nvmf_init_br" 00:21:20.138 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:21:20.138 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:20.138 Cannot find device "nvmf_init_br2" 00:21:20.138 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:21:20.139 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:20.139 Cannot find device "nvmf_tgt_br" 00:21:20.139 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:21:20.139 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:20.139 Cannot find device "nvmf_tgt_br2" 00:21:20.139 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:21:20.139 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:20.139 Cannot find device "nvmf_init_br" 00:21:20.139 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:21:20.139 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:20.139 Cannot find device "nvmf_init_br2" 00:21:20.139 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:21:20.139 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:20.139 Cannot find device "nvmf_tgt_br" 00:21:20.139 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:21:20.139 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:20.139 Cannot find device "nvmf_tgt_br2" 00:21:20.139 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:21:20.139 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:20.139 Cannot find device "nvmf_br" 00:21:20.139 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:21:20.139 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:20.139 Cannot find device "nvmf_init_if" 00:21:20.139 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:21:20.139 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:20.139 Cannot find device "nvmf_init_if2" 00:21:20.139 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:21:20.139 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:21:20.139 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:20.139 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:21:20.139 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:20.139 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:20.139 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:21:20.139 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:20.139 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:20.139 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:20.399 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:20.399 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:21:20.399 00:21:20.399 --- 10.0.0.3 ping statistics --- 00:21:20.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:20.399 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:20.399 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:20.399 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:21:20.399 00:21:20.399 --- 10.0.0.4 ping statistics --- 00:21:20.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:20.399 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:20.399 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:20.399 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:21:20.399 00:21:20.399 --- 10.0.0.1 ping statistics --- 00:21:20.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:20.399 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:20.399 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:20.399 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:21:20.399 00:21:20.399 --- 10.0.0.2 ping statistics --- 00:21:20.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:20.399 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@457 -- # return 0 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@505 -- # nvmfpid=94990 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@506 -- # waitforlisten 94990 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:20.399 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 94990 ']' 00:21:20.400 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:20.400 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:20.400 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:20.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:20.400 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:20.400 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:20.659 [2024-12-17 00:37:06.434080] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
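The failed deletions earlier in the log are expected on a fresh runner: the veth devices, bridge, and namespace do not exist yet, and nvmf/common.sh simply recreates the whole fixture before the multipath test starts. As a rough, condensed sketch of the topology those commands build (device names, addresses, and port 4420 are taken from the log itself; this is an illustration of the layout, not the actual test script), the initiator-side veth ends stay in the default namespace, the target-side ends move into nvmf_tgt_ns_spdk, and the nvmf_br bridge joins the two halves:

    # condensed recreation of the fixture shown in the log (first path only)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator end + its bridge-side peer
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target end + its bridge-side peer
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                  # target end lives inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator (host-side) address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                         # enslave the initiator-side peer
    ip link set nvmf_tgt_br master nvmf_br                          # enslave the target-side peer
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP (port 4420) on the initiator interface
    ping -c 1 10.0.0.3                                              # host can now reach the target address over the bridge

The second pair (nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4) follows the same pattern, and because NVMF_APP is prefixed with NVMF_TARGET_NS_CMD, nvmf_tgt itself is launched under ip netns exec nvmf_tgt_ns_spdk, so the NVMe/TCP listeners it later exposes on 10.0.0.3 sit behind the bridge from the host's point of view.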
00:21:20.659 [2024-12-17 00:37:06.434164] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:20.659 [2024-12-17 00:37:06.572476] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:20.659 [2024-12-17 00:37:06.615981] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:20.659 [2024-12-17 00:37:06.616040] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:20.659 [2024-12-17 00:37:06.616053] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:20.659 [2024-12-17 00:37:06.616064] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:20.659 [2024-12-17 00:37:06.616072] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:20.659 [2024-12-17 00:37:06.618353] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:20.659 [2024-12-17 00:37:06.618389] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.659 [2024-12-17 00:37:06.654447] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:20.918 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:20.918 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:21:20.918 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:20.918 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:20.918 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:20.918 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:20.918 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=94990 00:21:20.918 00:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:21.176 [2024-12-17 00:37:07.050589] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:21.176 00:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:21.435 Malloc0 00:21:21.435 00:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:21.694 00:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:21.953 00:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:22.212 [2024-12-17 00:37:08.036777] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:22.212 00:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:22.470 [2024-12-17 00:37:08.320909] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:21:22.470 00:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=95038 00:21:22.470 00:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:22.470 00:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:22.470 00:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 95038 /var/tmp/bdevperf.sock 00:21:22.470 00:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 95038 ']' 00:21:22.470 00:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:22.470 00:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:22.470 00:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:22.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:22.470 00:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:22.470 00:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:23.406 00:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:23.406 00:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:21:23.406 00:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:23.665 00:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:21:23.924 Nvme0n1 00:21:23.924 00:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:24.491 Nvme0n1 00:21:24.491 00:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:21:24.491 00:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:25.426 00:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:21:25.427 00:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:25.686 00:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:25.945 00:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:21:25.945 00:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95083 00:21:25.945 00:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94990 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:25.945 00:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:32.513 00:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:32.513 00:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:32.513 00:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:32.513 00:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:32.513 Attaching 4 probes... 00:21:32.513 @path[10.0.0.3, 4421]: 15121 00:21:32.513 @path[10.0.0.3, 4421]: 15454 00:21:32.513 @path[10.0.0.3, 4421]: 19600 00:21:32.513 @path[10.0.0.3, 4421]: 20414 00:21:32.513 @path[10.0.0.3, 4421]: 20584 00:21:32.513 00:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:32.513 00:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:32.513 00:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:32.513 00:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:32.513 00:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:32.513 00:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:32.513 00:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95083 00:21:32.513 00:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:32.513 00:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:21:32.513 00:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:32.513 00:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:21:32.772 00:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:21:32.772 00:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95201 00:21:32.772 00:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94990 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:32.772 00:37:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:39.339 00:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:39.339 00:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:39.339 00:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:21:39.339 00:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:39.339 Attaching 4 probes... 00:21:39.339 @path[10.0.0.3, 4420]: 20185 00:21:39.339 @path[10.0.0.3, 4420]: 20584 00:21:39.339 @path[10.0.0.3, 4420]: 20614 00:21:39.339 @path[10.0.0.3, 4420]: 20496 00:21:39.339 @path[10.0.0.3, 4420]: 20568 00:21:39.339 00:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:39.339 00:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:39.339 00:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:39.339 00:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:21:39.339 00:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:39.339 00:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:39.339 00:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95201 00:21:39.339 00:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:39.339 00:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:21:39.339 00:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:39.339 00:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:39.597 00:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:21:39.597 00:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95309 00:21:39.597 00:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94990 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:39.597 00:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:46.163 00:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:46.163 00:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:46.163 00:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:46.163 00:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:46.163 Attaching 4 probes... 00:21:46.163 @path[10.0.0.3, 4421]: 16574 00:21:46.163 @path[10.0.0.3, 4421]: 20210 00:21:46.163 @path[10.0.0.3, 4421]: 20222 00:21:46.163 @path[10.0.0.3, 4421]: 20346 00:21:46.163 @path[10.0.0.3, 4421]: 20056 00:21:46.163 00:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:46.163 00:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:46.163 00:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:46.163 00:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:46.164 00:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:46.164 00:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:46.164 00:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95309 00:21:46.164 00:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:46.164 00:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:21:46.164 00:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:46.164 00:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:21:46.422 00:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:21:46.422 00:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94990 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:46.422 00:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95427 00:21:46.422 00:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:52.983 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:52.983 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:21:52.983 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:21:52.983 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:52.983 Attaching 4 probes... 
00:21:52.983 00:21:52.983 00:21:52.983 00:21:52.983 00:21:52.983 00:21:52.983 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:52.983 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:52.983 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:52.983 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:21:52.983 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:21:52.983 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:21:52.983 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95427 00:21:52.983 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:52.983 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:21:52.983 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:52.983 00:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:53.242 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:21:53.242 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95541 00:21:53.242 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:53.242 00:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94990 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:59.864 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:59.864 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:59.864 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:59.864 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:59.864 Attaching 4 probes... 
00:21:59.864 @path[10.0.0.3, 4421]: 19474 00:21:59.864 @path[10.0.0.3, 4421]: 20193 00:21:59.864 @path[10.0.0.3, 4421]: 19915 00:21:59.864 @path[10.0.0.3, 4421]: 19931 00:21:59.864 @path[10.0.0.3, 4421]: 19960 00:21:59.864 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:59.864 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:59.864 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:59.864 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:59.864 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:59.864 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:59.864 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95541 00:21:59.864 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:59.864 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:59.864 00:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:22:00.800 00:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:22:00.800 00:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95665 00:22:00.800 00:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94990 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:00.800 00:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:07.365 00:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:22:07.365 00:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:07.365 00:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:22:07.365 00:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:07.365 Attaching 4 probes... 
00:22:07.365 @path[10.0.0.3, 4420]: 19345 00:22:07.365 @path[10.0.0.3, 4420]: 19825 00:22:07.365 @path[10.0.0.3, 4420]: 19971 00:22:07.365 @path[10.0.0.3, 4420]: 19916 00:22:07.365 @path[10.0.0.3, 4420]: 19944 00:22:07.365 00:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:07.365 00:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:07.365 00:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:07.365 00:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:22:07.365 00:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:22:07.365 00:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:22:07.365 00:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95665 00:22:07.365 00:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:07.365 00:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:07.365 [2024-12-17 00:37:53.083344] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:07.366 00:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:07.624 00:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:22:14.186 00:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:22:14.186 00:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95842 00:22:14.186 00:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94990 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:14.186 00:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:19.453 00:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:19.453 00:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:19.712 00:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:19.712 00:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:19.712 Attaching 4 probes... 
00:22:19.712 @path[10.0.0.3, 4421]: 19315 00:22:19.712 @path[10.0.0.3, 4421]: 19862 00:22:19.712 @path[10.0.0.3, 4421]: 19752 00:22:19.712 @path[10.0.0.3, 4421]: 19728 00:22:19.712 @path[10.0.0.3, 4421]: 19800 00:22:19.712 00:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:19.712 00:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:19.712 00:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:19.712 00:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:19.712 00:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:19.712 00:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:19.712 00:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95842 00:22:19.712 00:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:19.979 00:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 95038 00:22:19.979 00:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 95038 ']' 00:22:19.979 00:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 95038 00:22:19.979 00:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:22:19.979 00:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:19.979 00:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95038 00:22:19.979 killing process with pid 95038 00:22:19.979 00:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:19.979 00:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:19.979 00:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95038' 00:22:19.979 00:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 95038 00:22:19.979 00:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 95038 00:22:19.979 { 00:22:19.979 "results": [ 00:22:19.979 { 00:22:19.979 "job": "Nvme0n1", 00:22:19.979 "core_mask": "0x4", 00:22:19.979 "workload": "verify", 00:22:19.979 "status": "terminated", 00:22:19.979 "verify_range": { 00:22:19.979 "start": 0, 00:22:19.979 "length": 16384 00:22:19.979 }, 00:22:19.979 "queue_depth": 128, 00:22:19.979 "io_size": 4096, 00:22:19.979 "runtime": 55.38969, 00:22:19.979 "iops": 8364.968282003383, 00:22:19.979 "mibps": 32.675657351575715, 00:22:19.979 "io_failed": 0, 00:22:19.979 "io_timeout": 0, 00:22:19.979 "avg_latency_us": 15274.824855023768, 00:22:19.979 "min_latency_us": 904.8436363636364, 00:22:19.979 "max_latency_us": 7046430.72 00:22:19.979 } 00:22:19.979 ], 00:22:19.979 "core_count": 1 00:22:19.979 } 00:22:19.979 00:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 95038 00:22:19.979 00:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:19.979 [2024-12-17 00:37:08.382684] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 
22.11.4 initialization... 00:22:19.979 [2024-12-17 00:37:08.382777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95038 ] 00:22:19.979 [2024-12-17 00:37:08.515068] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.979 [2024-12-17 00:37:08.557446] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:19.979 [2024-12-17 00:37:08.590387] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:19.979 [2024-12-17 00:37:10.229564] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:22:19.979 Running I/O for 90 seconds... 00:22:19.979 7957.00 IOPS, 31.08 MiB/s [2024-12-17T00:38:05.982Z] 7775.50 IOPS, 30.37 MiB/s [2024-12-17T00:38:05.982Z] 7743.67 IOPS, 30.25 MiB/s [2024-12-17T00:38:05.982Z] 7759.50 IOPS, 30.31 MiB/s [2024-12-17T00:38:05.982Z] 8156.40 IOPS, 31.86 MiB/s [2024-12-17T00:38:05.982Z] 8500.33 IOPS, 33.20 MiB/s [2024-12-17T00:38:05.982Z] 8755.71 IOPS, 34.20 MiB/s [2024-12-17T00:38:05.982Z] 8907.50 IOPS, 34.79 MiB/s [2024-12-17T00:38:05.982Z] [2024-12-17 00:37:18.627301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:66512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.979 [2024-12-17 00:37:18.627407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:19.979 [2024-12-17 00:37:18.627476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.979 [2024-12-17 00:37:18.627499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:19.979 [2024-12-17 00:37:18.627522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.979 [2024-12-17 00:37:18.627538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:19.979 [2024-12-17 00:37:18.627559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.980 [2024-12-17 00:37:18.627575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:19.980 [2024-12-17 00:37:18.627596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:66544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.980 [2024-12-17 00:37:18.627611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:19.980 [2024-12-17 00:37:18.627632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.980 [2024-12-17 00:37:18.627663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:19.980 [2024-12-17 
00:37:18.627684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.980 [2024-12-17 00:37:18.627698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:19.980 [2024-12-17 00:37:18.627733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.980 [2024-12-17 00:37:18.627762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:19.980 [2024-12-17 00:37:18.627781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:66576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.980 [2024-12-17 00:37:18.627795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:19.980 [2024-12-17 00:37:18.627838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.980 [2024-12-17 00:37:18.627871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:19.980 [2024-12-17 00:37:18.627891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:66592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.980 [2024-12-17 00:37:18.627905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:19.980 [2024-12-17 00:37:18.627925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.980 [2024-12-17 00:37:18.627940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:19.980 [2024-12-17 00:37:18.627960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:66608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.980 [2024-12-17 00:37:18.627974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:19.980 [2024-12-17 00:37:18.627994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.980 [2024-12-17 00:37:18.628008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:19.980 [2024-12-17 00:37:18.628028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:66624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.980 [2024-12-17 00:37:18.628042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:19.980 [2024-12-17 00:37:18.628063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.980 [2024-12-17 00:37:18.628078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 
sqhd:0035 p:0 m:0 dnr:0 00:22:19.980 [2024-12-17 00:37:18.628098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.980 [2024-12-17 00:37:18.628113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:19.980 [2024-12-17 00:37:18.628133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:66200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.980 [2024-12-17 00:37:18.628148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:19.980 [2024-12-17 00:37:18.628168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:66208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.980 [2024-12-17 00:37:18.628182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:19.980 [2024-12-17 00:37:18.628202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:66216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.980 [2024-12-17 00:37:18.628216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:19.980 [2024-12-17 00:37:18.628236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:66224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.980 [2024-12-17 00:37:18.628250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:19.980 [2024-12-17 00:37:18.628279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:66232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.980 [2024-12-17 00:37:18.628294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:19.980 [2024-12-17 00:37:18.628314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:66240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.980 [2024-12-17 00:37:18.628345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:19.980 [2024-12-17 00:37:18.628365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:66248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.980 [2024-12-17 00:37:18.628380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:19.980 [2024-12-17 00:37:18.628413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:66256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.980 [2024-12-17 00:37:18.628432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:19.980 [2024-12-17 00:37:18.628454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:66264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.980 [2024-12-17 00:37:18.628469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:19.980 [2024-12-17 00:37:18.628490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:66272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.980 [2024-12-17 00:37:18.628505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:19.980 [2024-12-17 00:37:18.628525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:66280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.980 [2024-12-17 00:37:18.628540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:19.980 [2024-12-17 00:37:18.628603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:66288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.980 [2024-12-17 00:37:18.628620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:19.980 [2024-12-17 00:37:18.628643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:66296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.980 [2024-12-17 00:37:18.628659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:19.980 [2024-12-17 00:37:18.628681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:66304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.980 [2024-12-17 00:37:18.628697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:19.980 [2024-12-17 00:37:18.628719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:66312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.980 [2024-12-17 00:37:18.628736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:19.980 [2024-12-17 00:37:18.628998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.980 [2024-12-17 00:37:18.629024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:19.980 [2024-12-17 00:37:18.629048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.980 [2024-12-17 00:37:18.629075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:19.980 [2024-12-17 00:37:18.629097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:66656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.980 [2024-12-17 00:37:18.629112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:19.980 [2024-12-17 00:37:18.629132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.980 [2024-12-17 00:37:18.629146] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:19.980 [2024-12-17 00:37:18.629167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.980 [2024-12-17 00:37:18.629181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:19.980 [2024-12-17 00:37:18.629201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.980 [2024-12-17 00:37:18.629216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:19.980 [2024-12-17 00:37:18.629236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.980 [2024-12-17 00:37:18.629250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:19.980 [2024-12-17 00:37:18.629270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.980 [2024-12-17 00:37:18.629285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:19.980 [2024-12-17 00:37:18.629305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:66320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.980 [2024-12-17 00:37:18.629336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:19.980 [2024-12-17 00:37:18.629372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:66328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.980 [2024-12-17 00:37:18.629402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:19.980 [2024-12-17 00:37:18.629425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:66336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.980 [2024-12-17 00:37:18.629441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:19.981 [2024-12-17 00:37:18.629463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:66344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.981 [2024-12-17 00:37:18.629478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:19.981 [2024-12-17 00:37:18.629499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:66352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.981 [2024-12-17 00:37:18.629515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:19.981 [2024-12-17 00:37:18.629536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:66360 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:19.981 [2024-12-17 00:37:18.629559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:19.981 [2024-12-17 00:37:18.629581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:66368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.981 [2024-12-17 00:37:18.629596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:19.981 [2024-12-17 00:37:18.629618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:66376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.981 [2024-12-17 00:37:18.629634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:19.981 [2024-12-17 00:37:18.629655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:66704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.981 [2024-12-17 00:37:18.629670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:19.981 [2024-12-17 00:37:18.629706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:66712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.981 [2024-12-17 00:37:18.629721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:19.981 [2024-12-17 00:37:18.629756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:66720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.981 [2024-12-17 00:37:18.629771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:19.981 [2024-12-17 00:37:18.629790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:66728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.981 [2024-12-17 00:37:18.629805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:19.981 [2024-12-17 00:37:18.629825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:66736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.981 [2024-12-17 00:37:18.629840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:19.981 [2024-12-17 00:37:18.629859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:66744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.981 [2024-12-17 00:37:18.629874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:19.981 [2024-12-17 00:37:18.629894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.981 [2024-12-17 00:37:18.629909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:19.981 [2024-12-17 00:37:18.629928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:97 nsid:1 lba:66760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.981 [2024-12-17 00:37:18.629943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:19.981 [2024-12-17 00:37:18.629963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:66768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.981 [2024-12-17 00:37:18.629977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:19.981 [2024-12-17 00:37:18.629997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:66776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.981 [2024-12-17 00:37:18.630011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:19.981 [2024-12-17 00:37:18.630038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.981 [2024-12-17 00:37:18.630053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:19.981 [2024-12-17 00:37:18.630073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:66792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.981 [2024-12-17 00:37:18.630087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:19.981 [2024-12-17 00:37:18.630107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:66800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.981 [2024-12-17 00:37:18.630121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:19.981 [2024-12-17 00:37:18.630141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.981 [2024-12-17 00:37:18.630155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:19.981 [2024-12-17 00:37:18.630175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:66816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.981 [2024-12-17 00:37:18.630190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:19.981 [2024-12-17 00:37:18.630210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:66824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.981 [2024-12-17 00:37:18.630225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:19.981 [2024-12-17 00:37:18.630244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.981 [2024-12-17 00:37:18.630259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:19.981 [2024-12-17 00:37:18.630278] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.981 [2024-12-17 00:37:18.630293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:19.981 [2024-12-17 00:37:18.630330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.981 [2024-12-17 00:37:18.630345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:19.981 [2024-12-17 00:37:18.630381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:66856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.981 [2024-12-17 00:37:18.630397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:19.981 [2024-12-17 00:37:18.630418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:66864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.981 [2024-12-17 00:37:18.630433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:19.981 [2024-12-17 00:37:18.630453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:66872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.981 [2024-12-17 00:37:18.630468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:19.981 [2024-12-17 00:37:18.630496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:66880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.981 [2024-12-17 00:37:18.630512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:19.981 [2024-12-17 00:37:18.630532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.981 [2024-12-17 00:37:18.630547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:19.981 [2024-12-17 00:37:18.630567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:66384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.981 [2024-12-17 00:37:18.630582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:19.981 [2024-12-17 00:37:18.630603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:66392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.981 [2024-12-17 00:37:18.630617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:19.981 [2024-12-17 00:37:18.630637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:66400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.981 [2024-12-17 00:37:18.630652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 
00:22:19.981 [2024-12-17 00:37:18.630673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:66408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.981 [2024-12-17 00:37:18.630688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:19.981 [2024-12-17 00:37:18.630714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:66416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.981 [2024-12-17 00:37:18.630745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:19.981 [2024-12-17 00:37:18.630765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:66424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.981 [2024-12-17 00:37:18.630780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:19.981 [2024-12-17 00:37:18.630800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:66432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.981 [2024-12-17 00:37:18.630815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:19.981 [2024-12-17 00:37:18.630834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:66440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.981 [2024-12-17 00:37:18.630849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:19.981 [2024-12-17 00:37:18.630873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:66896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.981 [2024-12-17 00:37:18.630889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:19.981 [2024-12-17 00:37:18.630909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:66904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.981 [2024-12-17 00:37:18.630927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:19.981 [2024-12-17 00:37:18.630947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:66912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.982 [2024-12-17 00:37:18.630969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:19.982 [2024-12-17 00:37:18.630989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.982 [2024-12-17 00:37:18.631004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:19.982 [2024-12-17 00:37:18.631023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:66928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.982 [2024-12-17 00:37:18.631038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:19.982 [2024-12-17 00:37:18.631057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:66936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.982 [2024-12-17 00:37:18.631071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:19.982 [2024-12-17 00:37:18.631092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:66944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.982 [2024-12-17 00:37:18.631106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:19.982 [2024-12-17 00:37:18.631125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:66952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.982 [2024-12-17 00:37:18.631140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:19.982 [2024-12-17 00:37:18.631160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:66960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.982 [2024-12-17 00:37:18.631174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:19.982 [2024-12-17 00:37:18.631193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:66968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.982 [2024-12-17 00:37:18.631208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:19.982 [2024-12-17 00:37:18.631227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:66976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.982 [2024-12-17 00:37:18.631242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.982 [2024-12-17 00:37:18.631261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.982 [2024-12-17 00:37:18.631276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.982 [2024-12-17 00:37:18.631314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:66992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.982 [2024-12-17 00:37:18.631330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:19.982 [2024-12-17 00:37:18.631381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:67000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.982 [2024-12-17 00:37:18.631398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:19.982 [2024-12-17 00:37:18.631419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:67008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.982 [2024-12-17 00:37:18.631441] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:19.982 [2024-12-17 00:37:18.631463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:67016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.982 [2024-12-17 00:37:18.631479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:19.982 [2024-12-17 00:37:18.631500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:66448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.982 [2024-12-17 00:37:18.631516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:19.982 [2024-12-17 00:37:18.631537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:66456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.982 [2024-12-17 00:37:18.631555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:19.982 [2024-12-17 00:37:18.631577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.982 [2024-12-17 00:37:18.631593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:19.982 [2024-12-17 00:37:18.631613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:66472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.982 [2024-12-17 00:37:18.631629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:19.982 [2024-12-17 00:37:18.631649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:66480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.982 [2024-12-17 00:37:18.631665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:19.982 [2024-12-17 00:37:18.631700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:66488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.982 [2024-12-17 00:37:18.631714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:19.982 [2024-12-17 00:37:18.631749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:66496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.982 [2024-12-17 00:37:18.631764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:19.982 [2024-12-17 00:37:18.633283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:66504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.982 [2024-12-17 00:37:18.633362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:19.982 [2024-12-17 00:37:18.633409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:67024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:19.982 [2024-12-17 00:37:18.633430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:19.982 [2024-12-17 00:37:18.633453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:67032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.982 [2024-12-17 00:37:18.633470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:19.982 [2024-12-17 00:37:18.633492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:67040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.982 [2024-12-17 00:37:18.633509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:19.982 [2024-12-17 00:37:18.633545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.982 [2024-12-17 00:37:18.633563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:19.982 [2024-12-17 00:37:18.633589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:67056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.982 [2024-12-17 00:37:18.633607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:19.982 [2024-12-17 00:37:18.633630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:67064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.982 [2024-12-17 00:37:18.633646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:19.982 [2024-12-17 00:37:18.633669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:67072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.982 [2024-12-17 00:37:18.633685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:19.982 [2024-12-17 00:37:18.633869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:67080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.982 [2024-12-17 00:37:18.633912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:19.982 [2024-12-17 00:37:18.633938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:67088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.982 [2024-12-17 00:37:18.633955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:19.982 [2024-12-17 00:37:18.633976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:67096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.982 [2024-12-17 00:37:18.633996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:19.982 [2024-12-17 00:37:18.634018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 
lba:67104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.982 [2024-12-17 00:37:18.634034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:19.982 [2024-12-17 00:37:18.634055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:67112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.982 [2024-12-17 00:37:18.634088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:19.982 [2024-12-17 00:37:18.634110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:67120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.982 [2024-12-17 00:37:18.634135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:19.982 [2024-12-17 00:37:18.634157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:67128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.982 [2024-12-17 00:37:18.634173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:19.982 [2024-12-17 00:37:18.634195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:67136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.982 [2024-12-17 00:37:18.634211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:19.982 [2024-12-17 00:37:18.634251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:67144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.982 [2024-12-17 00:37:18.634270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:19.982 [2024-12-17 00:37:18.634292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:67152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.982 [2024-12-17 00:37:18.634339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:19.982 [2024-12-17 00:37:18.634361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:67160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.982 [2024-12-17 00:37:18.634377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:19.982 [2024-12-17 00:37:18.634398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:67168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.983 [2024-12-17 00:37:18.634428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:19.983 [2024-12-17 00:37:18.634470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:67176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.983 [2024-12-17 00:37:18.634487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:19.983 9005.00 IOPS, 35.18 MiB/s [2024-12-17T00:38:05.986Z] 9125.30 
IOPS, 35.65 MiB/s [2024-12-17T00:38:05.986Z] 9229.55 IOPS, 36.05 MiB/s [2024-12-17T00:38:05.986Z] 9319.08 IOPS, 36.40 MiB/s [2024-12-17T00:38:05.986Z] 9398.54 IOPS, 36.71 MiB/s [2024-12-17T00:38:05.986Z] 9462.07 IOPS, 36.96 MiB/s [2024-12-17T00:38:05.986Z] [2024-12-17 00:37:25.141485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:73064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.983 [2024-12-17 00:37:25.141535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:19.983 [2024-12-17 00:37:25.141602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:73072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.983 [2024-12-17 00:37:25.141622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:19.983 [2024-12-17 00:37:25.141642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:73080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.983 [2024-12-17 00:37:25.141656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:19.983 [2024-12-17 00:37:25.141674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:73088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.983 [2024-12-17 00:37:25.141688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:19.983 [2024-12-17 00:37:25.141706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:73096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.983 [2024-12-17 00:37:25.141719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:19.983 [2024-12-17 00:37:25.141737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:73104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.983 [2024-12-17 00:37:25.141750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:19.983 [2024-12-17 00:37:25.141768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:73112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.983 [2024-12-17 00:37:25.141802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:19.983 [2024-12-17 00:37:25.141824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:73120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.983 [2024-12-17 00:37:25.141837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:19.983 [2024-12-17 00:37:25.141856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:72552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.983 [2024-12-17 00:37:25.141869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.983 [2024-12-17 00:37:25.141887] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:72560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.983 [2024-12-17 00:37:25.141901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.983 [2024-12-17 00:37:25.141919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:72568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.983 [2024-12-17 00:37:25.141932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:19.983 [2024-12-17 00:37:25.141951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:72576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.983 [2024-12-17 00:37:25.141964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:19.983 [2024-12-17 00:37:25.141982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:72584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.983 [2024-12-17 00:37:25.141995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:19.983 [2024-12-17 00:37:25.142013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:72592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.983 [2024-12-17 00:37:25.142026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:19.983 [2024-12-17 00:37:25.142045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:72600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.983 [2024-12-17 00:37:25.142058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:19.983 [2024-12-17 00:37:25.142076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:72608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.983 [2024-12-17 00:37:25.142089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:19.983 [2024-12-17 00:37:25.142111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:73128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.983 [2024-12-17 00:37:25.142126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:19.983 [2024-12-17 00:37:25.142147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:73136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.983 [2024-12-17 00:37:25.142161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:19.983 [2024-12-17 00:37:25.142179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:73144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.983 [2024-12-17 00:37:25.142193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000a p:0 m:0 
dnr:0 00:22:19.983 [2024-12-17 00:37:25.142220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:73152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.983 [2024-12-17 00:37:25.142236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:19.983 [2024-12-17 00:37:25.142254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:73160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.983 [2024-12-17 00:37:25.142268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:19.983 [2024-12-17 00:37:25.142287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:73168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.983 [2024-12-17 00:37:25.142300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:19.983 [2024-12-17 00:37:25.142347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:73176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.983 [2024-12-17 00:37:25.142365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:19.983 [2024-12-17 00:37:25.142386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:73184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.983 [2024-12-17 00:37:25.142400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:19.983 [2024-12-17 00:37:25.142419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:73192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.983 [2024-12-17 00:37:25.142433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:19.983 [2024-12-17 00:37:25.142453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:73200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.983 [2024-12-17 00:37:25.142467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:19.983 [2024-12-17 00:37:25.142486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.983 [2024-12-17 00:37:25.142500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:19.983 [2024-12-17 00:37:25.142519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:73216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.983 [2024-12-17 00:37:25.142533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:19.983 [2024-12-17 00:37:25.142552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:73224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.983 [2024-12-17 00:37:25.142565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:19.983 [2024-12-17 00:37:25.142584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:73232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.983 [2024-12-17 00:37:25.142597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:19.983 [2024-12-17 00:37:25.142616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:73240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.983 [2024-12-17 00:37:25.142630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:19.983 [2024-12-17 00:37:25.142670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:73248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.983 [2024-12-17 00:37:25.142686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:19.984 [2024-12-17 00:37:25.142705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:72616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.984 [2024-12-17 00:37:25.142720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:19.984 [2024-12-17 00:37:25.142754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:72624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.984 [2024-12-17 00:37:25.142768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:19.984 [2024-12-17 00:37:25.142786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:72632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.984 [2024-12-17 00:37:25.142800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:19.984 [2024-12-17 00:37:25.142819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.984 [2024-12-17 00:37:25.142833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:19.984 [2024-12-17 00:37:25.142851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:72648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.984 [2024-12-17 00:37:25.142865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:19.984 [2024-12-17 00:37:25.142884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:72656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.984 [2024-12-17 00:37:25.142897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:19.984 [2024-12-17 00:37:25.142916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:72664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.984 [2024-12-17 00:37:25.142945] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:19.984 [2024-12-17 00:37:25.142965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.984 [2024-12-17 00:37:25.142979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:19.984 [2024-12-17 00:37:25.143015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:73256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.984 [2024-12-17 00:37:25.143033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:19.984 [2024-12-17 00:37:25.143053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:73264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.984 [2024-12-17 00:37:25.143067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:19.984 [2024-12-17 00:37:25.143086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:73272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.984 [2024-12-17 00:37:25.143099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:19.984 [2024-12-17 00:37:25.143118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:73280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.984 [2024-12-17 00:37:25.143140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:19.984 [2024-12-17 00:37:25.143159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:73288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.984 [2024-12-17 00:37:25.143174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:19.984 [2024-12-17 00:37:25.143192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:73296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.984 [2024-12-17 00:37:25.143206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:19.984 [2024-12-17 00:37:25.143225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:73304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.984 [2024-12-17 00:37:25.143269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:19.984 [2024-12-17 00:37:25.143289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:73312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.984 [2024-12-17 00:37:25.143302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:19.984 [2024-12-17 00:37:25.143324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:73320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:19.984 [2024-12-17 00:37:25.143339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:19.984 [2024-12-17 00:37:25.143358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:73328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.984 [2024-12-17 00:37:25.143385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:19.984 [2024-12-17 00:37:25.143407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:73336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.984 [2024-12-17 00:37:25.143423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:19.984 [2024-12-17 00:37:25.143442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:73344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.984 [2024-12-17 00:37:25.143456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:19.984 [2024-12-17 00:37:25.143474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:73352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.984 [2024-12-17 00:37:25.143488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:19.984 [2024-12-17 00:37:25.143507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:73360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.984 [2024-12-17 00:37:25.143520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:19.984 [2024-12-17 00:37:25.143539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:73368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.984 [2024-12-17 00:37:25.143552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:19.984 [2024-12-17 00:37:25.143572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:73376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.984 [2024-12-17 00:37:25.143592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:19.984 [2024-12-17 00:37:25.143612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:72680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.984 [2024-12-17 00:37:25.143627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:19.984 [2024-12-17 00:37:25.143646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:72688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.984 [2024-12-17 00:37:25.143660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:19.984 [2024-12-17 00:37:25.143679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 
lba:72696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.984 [2024-12-17 00:37:25.143692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:19.984 [2024-12-17 00:37:25.143711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:72704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.984 [2024-12-17 00:37:25.143725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:19.984 [2024-12-17 00:37:25.143744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:72712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.984 [2024-12-17 00:37:25.143757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:19.984 [2024-12-17 00:37:25.143776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:72720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.984 [2024-12-17 00:37:25.143790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:19.984 [2024-12-17 00:37:25.143809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:72728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.984 [2024-12-17 00:37:25.143822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:19.984 [2024-12-17 00:37:25.143841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:72736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.984 [2024-12-17 00:37:25.143855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:19.984 [2024-12-17 00:37:25.143874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:72744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.984 [2024-12-17 00:37:25.143888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:19.984 [2024-12-17 00:37:25.143906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:72752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.984 [2024-12-17 00:37:25.143920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:19.984 [2024-12-17 00:37:25.143940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:72760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.984 [2024-12-17 00:37:25.143954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:19.984 [2024-12-17 00:37:25.143973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.984 [2024-12-17 00:37:25.143987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:19.984 [2024-12-17 00:37:25.144012] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:72776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.984 [2024-12-17 00:37:25.144027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:19.984 [2024-12-17 00:37:25.144046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:72784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.984 [2024-12-17 00:37:25.144060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:19.985 [2024-12-17 00:37:25.144079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:72792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.985 [2024-12-17 00:37:25.144092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:19.985 [2024-12-17 00:37:25.144111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:72800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.985 [2024-12-17 00:37:25.144143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:19.985 [2024-12-17 00:37:25.144162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:72808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.985 [2024-12-17 00:37:25.144176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:19.985 [2024-12-17 00:37:25.144195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:72816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.985 [2024-12-17 00:37:25.144209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:19.985 [2024-12-17 00:37:25.144228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:72824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.985 [2024-12-17 00:37:25.144243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:19.985 [2024-12-17 00:37:25.144262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:72832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.985 [2024-12-17 00:37:25.144276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:19.985 [2024-12-17 00:37:25.144295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:72840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.985 [2024-12-17 00:37:25.144309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:19.985 [2024-12-17 00:37:25.144339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:72848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.985 [2024-12-17 00:37:25.144356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0045 p:0 m:0 
dnr:0 00:22:19.985 [2024-12-17 00:37:25.144375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:72856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.985 [2024-12-17 00:37:25.144390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:19.985 [2024-12-17 00:37:25.144409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:72864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.985 [2024-12-17 00:37:25.144423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:19.985 [2024-12-17 00:37:25.144478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:73384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.985 [2024-12-17 00:37:25.144497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:19.985 [2024-12-17 00:37:25.144516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.985 [2024-12-17 00:37:25.144531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:19.985 [2024-12-17 00:37:25.144575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:73400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.985 [2024-12-17 00:37:25.144593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:19.985 [2024-12-17 00:37:25.144613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:73408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.985 [2024-12-17 00:37:25.144627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:19.985 [2024-12-17 00:37:25.144647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:73416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.985 [2024-12-17 00:37:25.144661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:19.985 [2024-12-17 00:37:25.144681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:73424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.985 [2024-12-17 00:37:25.144696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:19.985 [2024-12-17 00:37:25.144715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:73432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.985 [2024-12-17 00:37:25.144729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:19.985 [2024-12-17 00:37:25.144749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:73440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.985 [2024-12-17 00:37:25.144763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:19.985 [2024-12-17 00:37:25.144782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:73448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.985 [2024-12-17 00:37:25.144796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:19.985 [2024-12-17 00:37:25.144815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:73456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.985 [2024-12-17 00:37:25.144829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:19.985 [2024-12-17 00:37:25.144848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:73464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.985 [2024-12-17 00:37:25.144863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:19.985 [2024-12-17 00:37:25.144882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:73472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.985 [2024-12-17 00:37:25.144897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:19.985 [2024-12-17 00:37:25.144930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:72872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.985 [2024-12-17 00:37:25.144951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:19.985 [2024-12-17 00:37:25.144972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:72880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.985 [2024-12-17 00:37:25.144986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:19.985 [2024-12-17 00:37:25.145005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:72888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.985 [2024-12-17 00:37:25.145019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:19.985 [2024-12-17 00:37:25.145043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:72896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.985 [2024-12-17 00:37:25.145058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:19.985 [2024-12-17 00:37:25.145077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:72904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.985 [2024-12-17 00:37:25.145091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:19.985 [2024-12-17 00:37:25.145110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:72912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.985 [2024-12-17 00:37:25.145123] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:19.985 [2024-12-17 00:37:25.145143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:72920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.985 [2024-12-17 00:37:25.145156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:19.985 [2024-12-17 00:37:25.145175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:72928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.985 [2024-12-17 00:37:25.145189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:19.985 [2024-12-17 00:37:25.145207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:73480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.985 [2024-12-17 00:37:25.145221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:19.985 [2024-12-17 00:37:25.145240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:73488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.985 [2024-12-17 00:37:25.145254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:19.985 [2024-12-17 00:37:25.145273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:73496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.985 [2024-12-17 00:37:25.145287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:19.985 [2024-12-17 00:37:25.145306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:73504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.985 [2024-12-17 00:37:25.145319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:19.985 [2024-12-17 00:37:25.145352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:73512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.985 [2024-12-17 00:37:25.145377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:19.985 [2024-12-17 00:37:25.145397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:73520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.985 [2024-12-17 00:37:25.145412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:19.985 [2024-12-17 00:37:25.145431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:73528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.985 [2024-12-17 00:37:25.145445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:19.985 [2024-12-17 00:37:25.145463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:73536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:19.985 [2024-12-17 00:37:25.145477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:19.985 [2024-12-17 00:37:25.145497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:73544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.985 [2024-12-17 00:37:25.145510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:19.985 [2024-12-17 00:37:25.145529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:73552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.986 [2024-12-17 00:37:25.145543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:19.986 [2024-12-17 00:37:25.145562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:73560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.986 [2024-12-17 00:37:25.145576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:19.986 [2024-12-17 00:37:25.145609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:73568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.986 [2024-12-17 00:37:25.145626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:19.986 [2024-12-17 00:37:25.145645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:72936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.986 [2024-12-17 00:37:25.145659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:19.986 [2024-12-17 00:37:25.145678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:72944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.986 [2024-12-17 00:37:25.145693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:19.986 [2024-12-17 00:37:25.145712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:72952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.986 [2024-12-17 00:37:25.145725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:19.986 [2024-12-17 00:37:25.145744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:72960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.986 [2024-12-17 00:37:25.145758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:19.986 [2024-12-17 00:37:25.145777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:72968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.986 [2024-12-17 00:37:25.145791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:19.986 [2024-12-17 00:37:25.145817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 
lba:72976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.986 [2024-12-17 00:37:25.145832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:19.986 [2024-12-17 00:37:25.145851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:72984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.986 [2024-12-17 00:37:25.145865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:19.986 [2024-12-17 00:37:25.145883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:72992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.986 [2024-12-17 00:37:25.145897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:19.986 [2024-12-17 00:37:25.145916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:73000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.986 [2024-12-17 00:37:25.145930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:19.986 [2024-12-17 00:37:25.145949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:73008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.986 [2024-12-17 00:37:25.145963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:19.986 [2024-12-17 00:37:25.145981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:73016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.986 [2024-12-17 00:37:25.145995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:19.986 [2024-12-17 00:37:25.146014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:73024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.986 [2024-12-17 00:37:25.146027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:19.986 [2024-12-17 00:37:25.146046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:73032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.986 [2024-12-17 00:37:25.146060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:19.986 [2024-12-17 00:37:25.146079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:73040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.986 [2024-12-17 00:37:25.146092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:19.986 [2024-12-17 00:37:25.146111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:73048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.986 [2024-12-17 00:37:25.146125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:19.986 [2024-12-17 00:37:25.146492] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:73056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.986 [2024-12-17 00:37:25.146517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:19.986 9342.73 IOPS, 36.50 MiB/s [2024-12-17T00:38:05.989Z] 8903.38 IOPS, 34.78 MiB/s [2024-12-17T00:38:05.989Z] 8974.94 IOPS, 35.06 MiB/s [2024-12-17T00:38:05.989Z] 9041.22 IOPS, 35.32 MiB/s [2024-12-17T00:38:05.989Z] 9102.63 IOPS, 35.56 MiB/s [2024-12-17T00:38:05.989Z] 9151.90 IOPS, 35.75 MiB/s [2024-12-17T00:38:05.989Z] 9196.86 IOPS, 35.93 MiB/s [2024-12-17T00:38:05.989Z] [2024-12-17 00:37:32.168097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:34416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.986 [2024-12-17 00:37:32.168190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:19.986 [2024-12-17 00:37:32.168259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:34424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.986 [2024-12-17 00:37:32.168280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:19.986 [2024-12-17 00:37:32.168301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.986 [2024-12-17 00:37:32.168316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:19.986 [2024-12-17 00:37:32.168352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:34440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.986 [2024-12-17 00:37:32.168368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:19.986 [2024-12-17 00:37:32.168388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:34448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.986 [2024-12-17 00:37:32.168403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:19.986 [2024-12-17 00:37:32.168422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:34456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.986 [2024-12-17 00:37:32.168437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:19.986 [2024-12-17 00:37:32.168456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:34464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.986 [2024-12-17 00:37:32.168471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:19.986 [2024-12-17 00:37:32.168490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.986 [2024-12-17 00:37:32.168504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 
dnr:0 00:22:19.986 [2024-12-17 00:37:32.168524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:33776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.986 [2024-12-17 00:37:32.168538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:19.986 [2024-12-17 00:37:32.168600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.986 [2024-12-17 00:37:32.168634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:19.986 [2024-12-17 00:37:32.168655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:33792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.986 [2024-12-17 00:37:32.168685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:19.986 [2024-12-17 00:37:32.168707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.986 [2024-12-17 00:37:32.168722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:19.986 [2024-12-17 00:37:32.168743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.986 [2024-12-17 00:37:32.168769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:19.986 [2024-12-17 00:37:32.168792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:33816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.986 [2024-12-17 00:37:32.168808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:19.986 [2024-12-17 00:37:32.168829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:33824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.986 [2024-12-17 00:37:32.168844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:19.986 [2024-12-17 00:37:32.168879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.986 [2024-12-17 00:37:32.168895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:19.986 [2024-12-17 00:37:32.168930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:33840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.986 [2024-12-17 00:37:32.168961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:19.986 [2024-12-17 00:37:32.168984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:33848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.986 [2024-12-17 00:37:32.168999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:19.986 [2024-12-17 00:37:32.169019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:33856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.986 [2024-12-17 00:37:32.169034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:19.986 [2024-12-17 00:37:32.169054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:33864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.986 [2024-12-17 00:37:32.169085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:19.987 [2024-12-17 00:37:32.169121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:33872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.987 [2024-12-17 00:37:32.169136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:19.987 [2024-12-17 00:37:32.169156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:33880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.987 [2024-12-17 00:37:32.169170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:19.987 [2024-12-17 00:37:32.169191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:33888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.987 [2024-12-17 00:37:32.169206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:19.987 [2024-12-17 00:37:32.169226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:33896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.987 [2024-12-17 00:37:32.169240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:19.987 [2024-12-17 00:37:32.169261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:33904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.987 [2024-12-17 00:37:32.169276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:19.987 [2024-12-17 00:37:32.169303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:33912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.987 [2024-12-17 00:37:32.169319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:19.987 [2024-12-17 00:37:32.169339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:33920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.987 [2024-12-17 00:37:32.169354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:19.987 [2024-12-17 00:37:32.169374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:33928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.987 [2024-12-17 00:37:32.169404] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:19.987 [2024-12-17 00:37:32.169438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:33936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.987 [2024-12-17 00:37:32.169453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:19.987 [2024-12-17 00:37:32.169482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:33944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.987 [2024-12-17 00:37:32.169499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:19.987 [2024-12-17 00:37:32.169519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.987 [2024-12-17 00:37:32.169534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:19.987 [2024-12-17 00:37:32.169553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:33960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.987 [2024-12-17 00:37:32.169567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:19.987 [2024-12-17 00:37:32.169615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.987 [2024-12-17 00:37:32.169634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:19.987 [2024-12-17 00:37:32.169654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.987 [2024-12-17 00:37:32.169669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:19.987 [2024-12-17 00:37:32.169687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:34496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.987 [2024-12-17 00:37:32.169701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:19.987 [2024-12-17 00:37:32.169720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:34504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.987 [2024-12-17 00:37:32.169734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:19.987 [2024-12-17 00:37:32.169753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:34512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.987 [2024-12-17 00:37:32.169767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:19.987 [2024-12-17 00:37:32.169795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:34520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:19.987 [2024-12-17 00:37:32.169810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:19.987 [2024-12-17 00:37:32.169829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:34528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.987 [2024-12-17 00:37:32.169859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:19.987 [2024-12-17 00:37:32.169878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:34536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.987 [2024-12-17 00:37:32.169892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:19.987 [2024-12-17 00:37:32.169928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:34544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.987 [2024-12-17 00:37:32.169942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:19.987 [2024-12-17 00:37:32.169962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:34552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.987 [2024-12-17 00:37:32.169977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:19.987 [2024-12-17 00:37:32.169996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:34560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.987 [2024-12-17 00:37:32.170010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:19.987 [2024-12-17 00:37:32.170030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:34568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.987 [2024-12-17 00:37:32.170044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:19.987 [2024-12-17 00:37:32.170064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:33968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.987 [2024-12-17 00:37:32.170078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.987 [2024-12-17 00:37:32.170098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:33976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.987 [2024-12-17 00:37:32.170113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:19.987 [2024-12-17 00:37:32.170133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:33984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.987 [2024-12-17 00:37:32.170148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:19.987 [2024-12-17 00:37:32.170167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 
lba:33992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.987 [2024-12-17 00:37:32.170182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:19.987 [2024-12-17 00:37:32.170202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:34000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.987 [2024-12-17 00:37:32.170216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:19.987 [2024-12-17 00:37:32.170246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:34008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.987 [2024-12-17 00:37:32.170276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:19.987 [2024-12-17 00:37:32.170314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:34016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.987 [2024-12-17 00:37:32.170329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:19.987 [2024-12-17 00:37:32.170348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:34024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.987 [2024-12-17 00:37:32.170363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:19.987 [2024-12-17 00:37:32.170393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.987 [2024-12-17 00:37:32.170411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:19.987 [2024-12-17 00:37:32.170432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:34584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.987 [2024-12-17 00:37:32.170447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:19.987 [2024-12-17 00:37:32.170467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:34592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.987 [2024-12-17 00:37:32.170482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:19.987 [2024-12-17 00:37:32.170502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.987 [2024-12-17 00:37:32.170520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:19.987 [2024-12-17 00:37:32.170543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:34608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.987 [2024-12-17 00:37:32.170559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:19.987 [2024-12-17 00:37:32.170595] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:34616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.987 [2024-12-17 00:37:32.170610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:19.987 [2024-12-17 00:37:32.170630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:34624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.987 [2024-12-17 00:37:32.170662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:19.987 [2024-12-17 00:37:32.170682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:34632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.988 [2024-12-17 00:37:32.170698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:19.988 [2024-12-17 00:37:32.170733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:34640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.988 [2024-12-17 00:37:32.170748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:19.988 [2024-12-17 00:37:32.170769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:34648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.988 [2024-12-17 00:37:32.170791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:19.988 [2024-12-17 00:37:32.170813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:34656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.988 [2024-12-17 00:37:32.170828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:19.988 [2024-12-17 00:37:32.170848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:34664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.988 [2024-12-17 00:37:32.170863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:19.988 [2024-12-17 00:37:32.170884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:34032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.988 [2024-12-17 00:37:32.170898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:19.988 [2024-12-17 00:37:32.170919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:34040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.988 [2024-12-17 00:37:32.170934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:19.988 [2024-12-17 00:37:32.170954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:34048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.988 [2024-12-17 00:37:32.170970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 
00:22:19.988 [2024-12-17 00:37:32.170991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:34056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.988 [2024-12-17 00:37:32.171006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:19.988 [2024-12-17 00:37:32.171027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.988 [2024-12-17 00:37:32.171042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:19.988 [2024-12-17 00:37:32.171062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:34072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.988 [2024-12-17 00:37:32.171078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:19.988 [2024-12-17 00:37:32.171098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:34080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.988 [2024-12-17 00:37:32.171129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:19.988 [2024-12-17 00:37:32.171150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:34088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.988 [2024-12-17 00:37:32.171166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:19.988 [2024-12-17 00:37:32.171186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:34096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.988 [2024-12-17 00:37:32.171216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:19.988 [2024-12-17 00:37:32.171251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.988 [2024-12-17 00:37:32.171271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:19.988 [2024-12-17 00:37:32.171292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:34112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.988 [2024-12-17 00:37:32.171306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:19.988 [2024-12-17 00:37:32.171326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:34120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.988 [2024-12-17 00:37:32.171341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:19.988 [2024-12-17 00:37:32.171360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:34128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.988 [2024-12-17 00:37:32.171374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:19.988 [2024-12-17 00:37:32.171405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:34136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.988 [2024-12-17 00:37:32.171438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:19.988 [2024-12-17 00:37:32.171460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.988 [2024-12-17 00:37:32.171475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:19.988 [2024-12-17 00:37:32.171496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:34152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.988 [2024-12-17 00:37:32.171511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:19.988 [2024-12-17 00:37:32.171531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:34160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.988 [2024-12-17 00:37:32.171562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:19.988 [2024-12-17 00:37:32.171583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:34168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.988 [2024-12-17 00:37:32.171599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:19.988 [2024-12-17 00:37:32.171621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:34176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.988 [2024-12-17 00:37:32.171636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:19.988 [2024-12-17 00:37:32.171657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.988 [2024-12-17 00:37:32.171672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:19.988 [2024-12-17 00:37:32.171693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:34192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.988 [2024-12-17 00:37:32.171708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:19.988 [2024-12-17 00:37:32.171744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:34200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.988 [2024-12-17 00:37:32.171759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:19.988 [2024-12-17 00:37:32.171787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:34208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.988 [2024-12-17 00:37:32.171803] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:19.988 [2024-12-17 00:37:32.171839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:34216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.988 [2024-12-17 00:37:32.171853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:19.988 [2024-12-17 00:37:32.171877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:34672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.988 [2024-12-17 00:37:32.171893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:19.988 [2024-12-17 00:37:32.171913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:34680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.988 [2024-12-17 00:37:32.171928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:19.988 [2024-12-17 00:37:32.171948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:34688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.988 [2024-12-17 00:37:32.171962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:19.988 [2024-12-17 00:37:32.171982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:34696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.988 [2024-12-17 00:37:32.171996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:19.988 [2024-12-17 00:37:32.172016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:34704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.988 [2024-12-17 00:37:32.172045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:19.988 [2024-12-17 00:37:32.172064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:34712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.988 [2024-12-17 00:37:32.172078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:19.989 [2024-12-17 00:37:32.172097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:34720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.989 [2024-12-17 00:37:32.172111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:19.989 [2024-12-17 00:37:32.172131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:34728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.989 [2024-12-17 00:37:32.172146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:19.989 [2024-12-17 00:37:32.172165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:34224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:19.989 [2024-12-17 00:37:32.172179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:19.989 [2024-12-17 00:37:32.172198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:34232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.989 [2024-12-17 00:37:32.172213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:19.989 [2024-12-17 00:37:32.172238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:34240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.989 [2024-12-17 00:37:32.172254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:19.989 [2024-12-17 00:37:32.172273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:34248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.989 [2024-12-17 00:37:32.172287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:19.989 [2024-12-17 00:37:32.172306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:34256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.989 [2024-12-17 00:37:32.172320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:19.989 [2024-12-17 00:37:32.172340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.989 [2024-12-17 00:37:32.172353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:19.989 [2024-12-17 00:37:32.172372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:34272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.989 [2024-12-17 00:37:32.172397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:19.989 [2024-12-17 00:37:32.172435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:34280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.989 [2024-12-17 00:37:32.172451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:19.989 [2024-12-17 00:37:32.172470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:34288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.989 [2024-12-17 00:37:32.172485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:19.989 [2024-12-17 00:37:32.172504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:34296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.989 [2024-12-17 00:37:32.172519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:19.989 [2024-12-17 00:37:32.172539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 
nsid:1 lba:34304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.989 [2024-12-17 00:37:32.172582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:19.989 [2024-12-17 00:37:32.172622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:34312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.989 [2024-12-17 00:37:32.172655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:19.989 [2024-12-17 00:37:32.172678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:34320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.989 [2024-12-17 00:37:32.172694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:19.989 [2024-12-17 00:37:32.172717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:34328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.989 [2024-12-17 00:37:32.172733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:19.989 [2024-12-17 00:37:32.172755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:34336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.989 [2024-12-17 00:37:32.172779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:19.989 [2024-12-17 00:37:32.172803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:34344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.989 [2024-12-17 00:37:32.172819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:19.989 [2024-12-17 00:37:32.172856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:34736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.989 [2024-12-17 00:37:32.172873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:19.989 [2024-12-17 00:37:32.172909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:34744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.989 [2024-12-17 00:37:32.172925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:19.989 [2024-12-17 00:37:32.172960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:34752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.989 [2024-12-17 00:37:32.172989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:19.989 [2024-12-17 00:37:32.173008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.989 [2024-12-17 00:37:32.173023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:19.989 [2024-12-17 00:37:32.173042] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:34768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.989 [2024-12-17 00:37:32.173056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:19.989 [2024-12-17 00:37:32.173075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:34776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.989 [2024-12-17 00:37:32.173089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:19.989 [2024-12-17 00:37:32.173108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:34784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.989 [2024-12-17 00:37:32.173122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:19.989 [2024-12-17 00:37:32.173141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:34792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.989 [2024-12-17 00:37:32.173155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:19.989 [2024-12-17 00:37:32.173174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:34352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.989 [2024-12-17 00:37:32.173188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:19.989 [2024-12-17 00:37:32.173207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:34360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.989 [2024-12-17 00:37:32.173221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:19.989 [2024-12-17 00:37:32.173240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:34368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.989 [2024-12-17 00:37:32.173260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:19.989 [2024-12-17 00:37:32.173280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.989 [2024-12-17 00:37:32.173294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:19.989 [2024-12-17 00:37:32.173313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:34384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.989 [2024-12-17 00:37:32.173327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:19.989 [2024-12-17 00:37:32.173346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:34392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.989 [2024-12-17 00:37:32.173360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0051 p:0 m:0 
dnr:0 00:22:19.989 [2024-12-17 00:37:32.173380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:34400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.989 [2024-12-17 00:37:32.173394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:19.989 [2024-12-17 00:37:32.173791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:34408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.989 [2024-12-17 00:37:32.173818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:19.989 9128.64 IOPS, 35.66 MiB/s [2024-12-17T00:38:05.992Z] 8731.74 IOPS, 34.11 MiB/s [2024-12-17T00:38:05.992Z] 8367.92 IOPS, 32.69 MiB/s [2024-12-17T00:38:05.992Z] 8033.20 IOPS, 31.38 MiB/s [2024-12-17T00:38:05.992Z] 7724.23 IOPS, 30.17 MiB/s [2024-12-17T00:38:05.992Z] 7438.15 IOPS, 29.06 MiB/s [2024-12-17T00:38:05.992Z] 7172.50 IOPS, 28.02 MiB/s [2024-12-17T00:38:05.992Z] 6991.34 IOPS, 27.31 MiB/s [2024-12-17T00:38:05.992Z] 7086.03 IOPS, 27.68 MiB/s [2024-12-17T00:38:05.992Z] 7180.55 IOPS, 28.05 MiB/s [2024-12-17T00:38:05.992Z] 7267.91 IOPS, 28.39 MiB/s [2024-12-17T00:38:05.992Z] 7351.42 IOPS, 28.72 MiB/s [2024-12-17T00:38:05.992Z] 7426.97 IOPS, 29.01 MiB/s [2024-12-17T00:38:05.992Z] 7492.71 IOPS, 29.27 MiB/s [2024-12-17T00:38:05.992Z] [2024-12-17 00:37:45.517565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.989 [2024-12-17 00:37:45.517614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.989 [2024-12-17 00:37:45.517649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.989 [2024-12-17 00:37:45.517661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.989 [2024-12-17 00:37:45.517674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.989 [2024-12-17 00:37:45.517686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.989 [2024-12-17 00:37:45.517698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.990 [2024-12-17 00:37:45.517710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.990 [2024-12-17 00:37:45.517722] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3e4a0 is same with the state(6) to be set 00:22:19.990 [2024-12-17 00:37:45.518256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.990 [2024-12-17 00:37:45.518281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:19.990 [2024-12-17 00:37:45.518373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12272 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:22:19.990 [2024-12-17 00:37:45.518395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:19.990 [2024-12-17 00:37:45.518416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.990 [2024-12-17 00:37:45.518430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:19.990 [2024-12-17 00:37:45.518450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.990 [2024-12-17 00:37:45.518464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:19.990 [2024-12-17 00:37:45.518483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:12296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.990 [2024-12-17 00:37:45.518497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:19.990 [2024-12-17 00:37:45.518516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.990 [2024-12-17 00:37:45.518530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:19.990 [2024-12-17 00:37:45.518549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.990 [2024-12-17 00:37:45.518563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:19.990 [2024-12-17 00:37:45.518582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.990 [2024-12-17 00:37:45.518596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:19.990 [2024-12-17 00:37:45.518615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.990 [2024-12-17 00:37:45.518629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:19.990 [2024-12-17 00:37:45.518649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.990 [2024-12-17 00:37:45.518663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:19.990 [2024-12-17 00:37:45.518695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.990 [2024-12-17 00:37:45.518709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:19.990 [2024-12-17 00:37:45.518743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 
nsid:1 lba:11904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.990 [2024-12-17 00:37:45.518757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:19.990 [2024-12-17 00:37:45.518776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.990 [2024-12-17 00:37:45.518792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:19.990 [2024-12-17 00:37:45.518812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.990 [2024-12-17 00:37:45.518836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:19.990 [2024-12-17 00:37:45.518857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.990 [2024-12-17 00:37:45.518872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:19.990 [2024-12-17 00:37:45.518891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.990 [2024-12-17 00:37:45.518905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:19.990 [2024-12-17 00:37:45.518924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.990 [2024-12-17 00:37:45.518953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:19.990 [2024-12-17 00:37:45.518972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.990 [2024-12-17 00:37:45.518985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:19.990 [2024-12-17 00:37:45.519004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.990 [2024-12-17 00:37:45.519018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:19.990 [2024-12-17 00:37:45.519037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.990 [2024-12-17 00:37:45.519062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:19.990 [2024-12-17 00:37:45.519092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.990 [2024-12-17 00:37:45.519106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:19.990 [2024-12-17 00:37:45.519124] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.990 [2024-12-17 00:37:45.519138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:19.990 [2024-12-17 00:37:45.519157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.990 [2024-12-17 00:37:45.519192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:19.990 [2024-12-17 00:37:45.519229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.990 [2024-12-17 00:37:45.519243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:19.990 [2024-12-17 00:37:45.519302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.990 [2024-12-17 00:37:45.519322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.990 [2024-12-17 00:37:45.519337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.990 [2024-12-17 00:37:45.519359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.990 [2024-12-17 00:37:45.519375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.990 [2024-12-17 00:37:45.519388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.990 [2024-12-17 00:37:45.519403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.990 [2024-12-17 00:37:45.519444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.990 [2024-12-17 00:37:45.519462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.990 [2024-12-17 00:37:45.519475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.990 [2024-12-17 00:37:45.519490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.990 [2024-12-17 00:37:45.519518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.990 [2024-12-17 00:37:45.519533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.990 [2024-12-17 00:37:45.519562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.990 [2024-12-17 00:37:45.519577] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.990 [2024-12-17 00:37:45.519590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.990 [2024-12-17 00:37:45.519605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.990 [2024-12-17 00:37:45.519619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.990 [2024-12-17 00:37:45.519648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.990 [2024-12-17 00:37:45.519661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.990 [2024-12-17 00:37:45.519675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.990 [2024-12-17 00:37:45.519688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.990 [2024-12-17 00:37:45.519709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.990 [2024-12-17 00:37:45.519722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.990 [2024-12-17 00:37:45.519736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.990 [2024-12-17 00:37:45.519749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.990 [2024-12-17 00:37:45.519763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.990 [2024-12-17 00:37:45.519791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.990 [2024-12-17 00:37:45.519811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.990 [2024-12-17 00:37:45.519841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.991 [2024-12-17 00:37:45.519855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.991 [2024-12-17 00:37:45.519868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.991 [2024-12-17 00:37:45.519898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.991 [2024-12-17 00:37:45.519941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.991 [2024-12-17 00:37:45.519969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:61 nsid:1 lba:12464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.991 [2024-12-17 00:37:45.519982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.991 [2024-12-17 00:37:45.519996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.991 [2024-12-17 00:37:45.520009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.991 [2024-12-17 00:37:45.520022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.991 [2024-12-17 00:37:45.520034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.991 [2024-12-17 00:37:45.520048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.991 [2024-12-17 00:37:45.520060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.991 [2024-12-17 00:37:45.520073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.991 [2024-12-17 00:37:45.520102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.991 [2024-12-17 00:37:45.520116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.991 [2024-12-17 00:37:45.520128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.991 [2024-12-17 00:37:45.520142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.991 [2024-12-17 00:37:45.520155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.991 [2024-12-17 00:37:45.520185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.991 [2024-12-17 00:37:45.520198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.991 [2024-12-17 00:37:45.520212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.991 [2024-12-17 00:37:45.520225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.991 [2024-12-17 00:37:45.520243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.991 [2024-12-17 00:37:45.520256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.991 [2024-12-17 00:37:45.520276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12064 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.991 [2024-12-17 00:37:45.520290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.991 [2024-12-17 00:37:45.520304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.991 [2024-12-17 00:37:45.520349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.991 [2024-12-17 00:37:45.520380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.991 [2024-12-17 00:37:45.520393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.991 [2024-12-17 00:37:45.520408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.991 [2024-12-17 00:37:45.520421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.991 [2024-12-17 00:37:45.520436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:12512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.991 [2024-12-17 00:37:45.520449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.991 [2024-12-17 00:37:45.520480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.991 [2024-12-17 00:37:45.520494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.991 [2024-12-17 00:37:45.520509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.991 [2024-12-17 00:37:45.520523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.991 [2024-12-17 00:37:45.520580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.991 [2024-12-17 00:37:45.520598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.991 [2024-12-17 00:37:45.520614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.991 [2024-12-17 00:37:45.520628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.991 [2024-12-17 00:37:45.520644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.991 [2024-12-17 00:37:45.520658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.991 [2024-12-17 00:37:45.520674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.991 
[2024-12-17 00:37:45.520689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.991 [2024-12-17 00:37:45.520710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.991 [2024-12-17 00:37:45.520726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.991 [2024-12-17 00:37:45.520741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.991 [2024-12-17 00:37:45.520764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.991 [2024-12-17 00:37:45.520780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.991 [2024-12-17 00:37:45.520794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.991 [2024-12-17 00:37:45.520810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.991 [2024-12-17 00:37:45.520824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.991 [2024-12-17 00:37:45.520840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.991 [2024-12-17 00:37:45.520854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.991 [2024-12-17 00:37:45.520870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:12096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.991 [2024-12-17 00:37:45.520884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.991 [2024-12-17 00:37:45.520900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.991 [2024-12-17 00:37:45.520914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.991 [2024-12-17 00:37:45.520959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.991 [2024-12-17 00:37:45.520973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.991 [2024-12-17 00:37:45.521002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.991 [2024-12-17 00:37:45.521015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.991 [2024-12-17 00:37:45.521029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.991 [2024-12-17 00:37:45.521042] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.991 [2024-12-17 00:37:45.521056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.991 [2024-12-17 00:37:45.521069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.991 [2024-12-17 00:37:45.521083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.991 [2024-12-17 00:37:45.521097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.991 [2024-12-17 00:37:45.521111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.991 [2024-12-17 00:37:45.521124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.991 [2024-12-17 00:37:45.521138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.991 [2024-12-17 00:37:45.521166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.991 [2024-12-17 00:37:45.521186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.991 [2024-12-17 00:37:45.521199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.991 [2024-12-17 00:37:45.521213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.991 [2024-12-17 00:37:45.521226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.991 [2024-12-17 00:37:45.521242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.991 [2024-12-17 00:37:45.521256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.992 [2024-12-17 00:37:45.521271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.992 [2024-12-17 00:37:45.521284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.992 [2024-12-17 00:37:45.521298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.992 [2024-12-17 00:37:45.521311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.992 [2024-12-17 00:37:45.521324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.992 [2024-12-17 00:37:45.521337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.992 [2024-12-17 00:37:45.521366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.992 [2024-12-17 00:37:45.521379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.992 [2024-12-17 00:37:45.521393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.992 [2024-12-17 00:37:45.521417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.992 [2024-12-17 00:37:45.521432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.992 [2024-12-17 00:37:45.521444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.992 [2024-12-17 00:37:45.521458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.992 [2024-12-17 00:37:45.521471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.992 [2024-12-17 00:37:45.521484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.992 [2024-12-17 00:37:45.521497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.992 [2024-12-17 00:37:45.521510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.992 [2024-12-17 00:37:45.521523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.992 [2024-12-17 00:37:45.521536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.992 [2024-12-17 00:37:45.521554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.992 [2024-12-17 00:37:45.521569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.992 [2024-12-17 00:37:45.521585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.992 [2024-12-17 00:37:45.521600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.992 [2024-12-17 00:37:45.521612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.992 [2024-12-17 00:37:45.521626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.992 [2024-12-17 00:37:45.521638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:19.992 [2024-12-17 00:37:45.521652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.992 [2024-12-17 00:37:45.521664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.992 [2024-12-17 00:37:45.521678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.992 [2024-12-17 00:37:45.521691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.992 [2024-12-17 00:37:45.521705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.992 [2024-12-17 00:37:45.521718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.992 [2024-12-17 00:37:45.521731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.992 [2024-12-17 00:37:45.521744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.992 [2024-12-17 00:37:45.521757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.992 [2024-12-17 00:37:45.521770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.992 [2024-12-17 00:37:45.521783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.992 [2024-12-17 00:37:45.521796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.992 [2024-12-17 00:37:45.521809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.992 [2024-12-17 00:37:45.521822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.992 [2024-12-17 00:37:45.521835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.992 [2024-12-17 00:37:45.521848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.992 [2024-12-17 00:37:45.521861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.992 [2024-12-17 00:37:45.521873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.992 [2024-12-17 00:37:45.521887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.992 [2024-12-17 00:37:45.521908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.992 [2024-12-17 
00:37:45.521923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.992 [2024-12-17 00:37:45.521936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.992 [2024-12-17 00:37:45.521949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:19.992 [2024-12-17 00:37:45.521961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.992 [2024-12-17 00:37:45.521975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.992 [2024-12-17 00:37:45.521987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.992 [2024-12-17 00:37:45.522001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.992 [2024-12-17 00:37:45.522015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.992 [2024-12-17 00:37:45.522029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.992 [2024-12-17 00:37:45.522041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.992 [2024-12-17 00:37:45.522055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.992 [2024-12-17 00:37:45.522067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.992 [2024-12-17 00:37:45.522081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.992 [2024-12-17 00:37:45.522093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.992 [2024-12-17 00:37:45.522106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.992 [2024-12-17 00:37:45.522119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.992 [2024-12-17 00:37:45.522132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:19.992 [2024-12-17 00:37:45.522145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.992 [2024-12-17 00:37:45.522158] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf81860 is same with the state(6) to be set 00:22:19.992 [2024-12-17 00:37:45.522172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.992 [2024-12-17 00:37:45.522182] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:22:19.992 [2024-12-17 00:37:45.522192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12256 len:8 PRP1 0x0 PRP2 0x0 00:22:19.992 [2024-12-17 00:37:45.522204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.992 [2024-12-17 00:37:45.522216] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.992 [2024-12-17 00:37:45.522225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.992 [2024-12-17 00:37:45.522240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12776 len:8 PRP1 0x0 PRP2 0x0 00:22:19.993 [2024-12-17 00:37:45.522253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.993 [2024-12-17 00:37:45.522266] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.993 [2024-12-17 00:37:45.522275] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.993 [2024-12-17 00:37:45.522284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12784 len:8 PRP1 0x0 PRP2 0x0 00:22:19.993 [2024-12-17 00:37:45.522296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.993 [2024-12-17 00:37:45.522333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.993 [2024-12-17 00:37:45.522345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.993 [2024-12-17 00:37:45.522355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12792 len:8 PRP1 0x0 PRP2 0x0 00:22:19.993 [2024-12-17 00:37:45.522367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.993 [2024-12-17 00:37:45.522379] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.993 [2024-12-17 00:37:45.522389] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.993 [2024-12-17 00:37:45.522398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:8 PRP1 0x0 PRP2 0x0 00:22:19.993 [2024-12-17 00:37:45.522410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.993 [2024-12-17 00:37:45.522425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.993 [2024-12-17 00:37:45.522434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.993 [2024-12-17 00:37:45.522444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12808 len:8 PRP1 0x0 PRP2 0x0 00:22:19.993 [2024-12-17 00:37:45.522456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.993 [2024-12-17 00:37:45.522469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.993 [2024-12-17 00:37:45.522478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.993 [2024-12-17 00:37:45.522487] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12816 len:8 PRP1 0x0 PRP2 0x0 00:22:19.993 [2024-12-17 00:37:45.522499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.993 [2024-12-17 00:37:45.522512] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.993 [2024-12-17 00:37:45.522521] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.993 [2024-12-17 00:37:45.522531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12824 len:8 PRP1 0x0 PRP2 0x0 00:22:19.993 [2024-12-17 00:37:45.522543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.993 [2024-12-17 00:37:45.522555] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.993 [2024-12-17 00:37:45.522564] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.993 [2024-12-17 00:37:45.522573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:8 PRP1 0x0 PRP2 0x0 00:22:19.993 [2024-12-17 00:37:45.522585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.993 [2024-12-17 00:37:45.522597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.993 [2024-12-17 00:37:45.522612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.993 [2024-12-17 00:37:45.522623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12840 len:8 PRP1 0x0 PRP2 0x0 00:22:19.993 [2024-12-17 00:37:45.522635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.993 [2024-12-17 00:37:45.522647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.993 [2024-12-17 00:37:45.522656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.993 [2024-12-17 00:37:45.522666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12848 len:8 PRP1 0x0 PRP2 0x0 00:22:19.993 [2024-12-17 00:37:45.522677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.993 [2024-12-17 00:37:45.522690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.993 [2024-12-17 00:37:45.522699] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.993 [2024-12-17 00:37:45.522709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12856 len:8 PRP1 0x0 PRP2 0x0 00:22:19.993 [2024-12-17 00:37:45.522735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.993 [2024-12-17 00:37:45.522747] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.993 [2024-12-17 00:37:45.522756] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.993 [2024-12-17 00:37:45.522765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:12864 len:8 PRP1 0x0 PRP2 0x0 00:22:19.993 [2024-12-17 00:37:45.522777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.993 [2024-12-17 00:37:45.522790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.993 [2024-12-17 00:37:45.522800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.993 [2024-12-17 00:37:45.522809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12872 len:8 PRP1 0x0 PRP2 0x0 00:22:19.993 [2024-12-17 00:37:45.522821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.993 [2024-12-17 00:37:45.522833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.993 [2024-12-17 00:37:45.522841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.993 [2024-12-17 00:37:45.522850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12880 len:8 PRP1 0x0 PRP2 0x0 00:22:19.993 [2024-12-17 00:37:45.522862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.993 [2024-12-17 00:37:45.522874] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.993 [2024-12-17 00:37:45.522883] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.993 [2024-12-17 00:37:45.522892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12888 len:8 PRP1 0x0 PRP2 0x0 00:22:19.993 [2024-12-17 00:37:45.522904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.993 [2024-12-17 00:37:45.522916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:19.993 [2024-12-17 00:37:45.522925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:19.993 [2024-12-17 00:37:45.522934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:8 PRP1 0x0 PRP2 0x0 00:22:19.993 [2024-12-17 00:37:45.522946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.993 [2024-12-17 00:37:45.532592] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf81860 was disconnected and freed. reset controller. 
00:22:19.993 [2024-12-17 00:37:45.532693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf3e4a0 (9): Bad file descriptor 00:22:19.993 [2024-12-17 00:37:45.533857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:19.993 [2024-12-17 00:37:45.534123] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:19.993 [2024-12-17 00:37:45.534156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf3e4a0 with addr=10.0.0.3, port=4421 00:22:19.993 [2024-12-17 00:37:45.534172] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3e4a0 is same with the state(6) to be set 00:22:19.993 [2024-12-17 00:37:45.534308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf3e4a0 (9): Bad file descriptor 00:22:19.993 [2024-12-17 00:37:45.534412] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:19.993 [2024-12-17 00:37:45.534435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:19.993 [2024-12-17 00:37:45.534450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:19.993 [2024-12-17 00:37:45.534483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:19.993 [2024-12-17 00:37:45.534499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:19.993 7558.17 IOPS, 29.52 MiB/s [2024-12-17T00:38:05.996Z] 7612.92 IOPS, 29.74 MiB/s [2024-12-17T00:38:05.996Z] 7671.95 IOPS, 29.97 MiB/s [2024-12-17T00:38:05.996Z] 7730.21 IOPS, 30.20 MiB/s [2024-12-17T00:38:05.996Z] 7784.35 IOPS, 30.41 MiB/s [2024-12-17T00:38:05.996Z] 7837.02 IOPS, 30.61 MiB/s [2024-12-17T00:38:05.996Z] 7889.10 IOPS, 30.82 MiB/s [2024-12-17T00:38:05.996Z] 7931.49 IOPS, 30.98 MiB/s [2024-12-17T00:38:05.996Z] 7976.50 IOPS, 31.16 MiB/s [2024-12-17T00:38:05.996Z] 8020.58 IOPS, 31.33 MiB/s [2024-12-17T00:38:05.996Z] [2024-12-17 00:37:55.599286] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:19.993 8062.72 IOPS, 31.49 MiB/s [2024-12-17T00:38:05.996Z] 8102.74 IOPS, 31.65 MiB/s [2024-12-17T00:38:05.996Z] 8142.10 IOPS, 31.81 MiB/s [2024-12-17T00:38:05.996Z] 8180.35 IOPS, 31.95 MiB/s [2024-12-17T00:38:05.996Z] 8207.46 IOPS, 32.06 MiB/s [2024-12-17T00:38:05.996Z] 8238.84 IOPS, 32.18 MiB/s [2024-12-17T00:38:05.996Z] 8269.94 IOPS, 32.30 MiB/s [2024-12-17T00:38:05.996Z] 8300.92 IOPS, 32.43 MiB/s [2024-12-17T00:38:05.996Z] 8331.72 IOPS, 32.55 MiB/s [2024-12-17T00:38:05.996Z] 8359.65 IOPS, 32.65 MiB/s [2024-12-17T00:38:05.996Z] Received shutdown signal, test time was about 55.390571 seconds 00:22:19.993 00:22:19.993 Latency(us) 00:22:19.993 [2024-12-17T00:38:05.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:19.993 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:19.993 Verification LBA range: start 0x0 length 0x4000 00:22:19.993 Nvme0n1 : 55.39 8364.97 32.68 0.00 0.00 15274.82 904.84 7046430.72 00:22:19.993 [2024-12-17T00:38:05.996Z] =================================================================================================================== 00:22:19.993 [2024-12-17T00:38:05.996Z] Total : 8364.97 32.68 0.00 0.00 15274.82 904.84 7046430.72 00:22:19.993 [2024-12-17 00:38:05.762984] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times 00:22:19.993 00:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:20.252 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:22:20.252 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:20.252 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:22:20.252 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:22:20.252 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:22:20.252 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:20.252 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:22:20.252 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:20.252 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:20.252 rmmod nvme_tcp 00:22:20.252 rmmod nvme_fabrics 00:22:20.252 rmmod nvme_keyring 00:22:20.511 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:20.511 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:22:20.511 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:22:20.511 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@513 -- # '[' -n 94990 ']' 00:22:20.511 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@514 -- # killprocess 94990 00:22:20.511 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 94990 ']' 00:22:20.511 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 94990 00:22:20.511 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
common/autotest_common.sh@955 -- # uname 00:22:20.511 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:20.511 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94990 00:22:20.511 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:20.511 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:20.511 killing process with pid 94990 00:22:20.511 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94990' 00:22:20.511 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 94990 00:22:20.511 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 94990 00:22:20.511 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:22:20.511 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:22:20.511 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:22:20.511 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:22:20.511 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@787 -- # iptables-save 00:22:20.511 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:22:20.511 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:22:20.511 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:20.511 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:20.511 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:20.511 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:20.511 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:20.511 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:20.770 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:20.770 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:20.770 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:20.770 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:20.770 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:20.770 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:20.770 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:20.770 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:20.770 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:20.770 00:38:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:20.770 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.770 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:20.770 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.770 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:22:20.770 00:22:20.770 real 1m0.909s 00:22:20.770 user 2m49.423s 00:22:20.770 sys 0m17.786s 00:22:20.770 ************************************ 00:22:20.770 END TEST nvmf_host_multipath 00:22:20.770 ************************************ 00:22:20.770 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:20.770 00:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:20.770 00:38:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:20.770 00:38:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:20.770 00:38:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:20.770 00:38:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.770 ************************************ 00:22:20.770 START TEST nvmf_timeout 00:22:20.770 ************************************ 00:22:20.770 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:21.030 * Looking for test storage... 
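
For reference, the nvmftestfini teardown traced a few lines above (before the timeout test begins) reduces to a short sequence of iptables and ip commands. Condensed here with the interface and namespace names the harness uses; the pipe composition of the iptr helper and the final namespace removal are assumptions, since only the individual calls appear in the trace:

  iptables-save | grep -v SPDK_NVMF | iptables-restore   # assumed composition of the iptr helper
  for link in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$link" nomaster; done
  for link in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$link" down; done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk   # assumption: _remove_spdk_ns drops the namespace itself
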
00:22:21.030 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:21.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.030 --rc genhtml_branch_coverage=1 00:22:21.030 --rc genhtml_function_coverage=1 00:22:21.030 --rc genhtml_legend=1 00:22:21.030 --rc geninfo_all_blocks=1 00:22:21.030 --rc geninfo_unexecuted_blocks=1 00:22:21.030 00:22:21.030 ' 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:21.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.030 --rc genhtml_branch_coverage=1 00:22:21.030 --rc genhtml_function_coverage=1 00:22:21.030 --rc genhtml_legend=1 00:22:21.030 --rc geninfo_all_blocks=1 00:22:21.030 --rc geninfo_unexecuted_blocks=1 00:22:21.030 00:22:21.030 ' 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:21.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.030 --rc genhtml_branch_coverage=1 00:22:21.030 --rc genhtml_function_coverage=1 00:22:21.030 --rc genhtml_legend=1 00:22:21.030 --rc geninfo_all_blocks=1 00:22:21.030 --rc geninfo_unexecuted_blocks=1 00:22:21.030 00:22:21.030 ' 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:21.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.030 --rc genhtml_branch_coverage=1 00:22:21.030 --rc genhtml_function_coverage=1 00:22:21.030 --rc genhtml_legend=1 00:22:21.030 --rc geninfo_all_blocks=1 00:22:21.030 --rc geninfo_unexecuted_blocks=1 00:22:21.030 00:22:21.030 ' 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:21.030 
00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:21.030 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:21.031 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@472 -- # prepare_net_devs 00:22:21.031 00:38:06 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@434 -- # local -g is_hw=no 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@436 -- # remove_spdk_ns 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@456 -- # nvmf_veth_init 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:21.031 Cannot find device "nvmf_init_br" 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:21.031 Cannot find device "nvmf_init_br2" 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:22:21.031 Cannot find device "nvmf_tgt_br" 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:22:21.031 00:38:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:21.031 Cannot find device "nvmf_tgt_br2" 00:22:21.031 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:22:21.031 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:21.031 Cannot find device "nvmf_init_br" 00:22:21.031 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:22:21.031 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:21.031 Cannot find device "nvmf_init_br2" 00:22:21.031 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:22:21.031 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:21.290 Cannot find device "nvmf_tgt_br" 00:22:21.290 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:22:21.290 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:21.290 Cannot find device "nvmf_tgt_br2" 00:22:21.290 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:22:21.290 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:21.290 Cannot find device "nvmf_br" 00:22:21.290 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:22:21.290 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:21.290 Cannot find device "nvmf_init_if" 00:22:21.290 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:22:21.290 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:21.290 Cannot find device "nvmf_init_if2" 00:22:21.290 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:22:21.290 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:21.290 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:21.290 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:22:21.290 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:21.290 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:21.290 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:22:21.290 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:21.290 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:21.290 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:21.290 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:21.290 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:21.290 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:22:21.290 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:21.290 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:21.290 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:21.290 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:21.290 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:21.290 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:21.290 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:21.290 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:21.290 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:21.290 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:21.290 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:21.290 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:21.290 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:21.290 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:21.290 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:21.290 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:21.290 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:21.549 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:21.549 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:21.549 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:21.549 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:21.549 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:21.549 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:21.549 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:21.549 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:21.549 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
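The nvmf_veth_init calls traced above build the virtual test network: a target namespace, four veth pairs, a bridge joining their peer ends, 10.0.0.1-10.0.0.4/24 addressing, and iptables rules admitting NVMe/TCP traffic on port 4420. The following is a minimal sketch reconstructed from the commands visible in this trace, not the verbatim code of test/nvmf/common.sh; names and addresses are the ones the trace uses.

#!/usr/bin/env bash
# Sketch of the veth/bridge topology built by nvmf_veth_init (reconstructed from the trace).
set -e
NS=nvmf_tgt_ns_spdk

ip netns add "$NS"

# Initiator-side and target-side veth pairs; the *_br ends will join the bridge.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target interfaces live inside the namespace where nvmf_tgt runs.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Addressing used throughout the test: initiators .1/.2, targets .3/.4.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up, then bridge the four *_br peers together.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Admit NVMe/TCP traffic on port 4420 and let the bridge forward it.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The "Cannot find device" and "Cannot open network namespace" messages earlier in the trace come from the teardown of a previous topology and are expected when no stale interfaces exist.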
00:22:21.549 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:21.549 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:21.549 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:22:21.549 00:22:21.549 --- 10.0.0.3 ping statistics --- 00:22:21.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.549 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:22:21.549 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:21.549 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:21.549 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:22:21.549 00:22:21.549 --- 10.0.0.4 ping statistics --- 00:22:21.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.549 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:22:21.549 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:21.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:21.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:22:21.549 00:22:21.549 --- 10.0.0.1 ping statistics --- 00:22:21.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.549 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:22:21.549 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:21.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:21.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:22:21.549 00:22:21.549 --- 10.0.0.2 ping statistics --- 00:22:21.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.549 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:22:21.549 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:21.549 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@457 -- # return 0 00:22:21.549 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:22:21.549 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:21.550 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:22:21.550 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:22:21.550 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:21.550 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:22:21.550 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:22:21.550 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:22:21.550 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:21.550 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:21.550 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:21.550 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@505 -- # nvmfpid=96200 00:22:21.550 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:21.550 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@506 -- # waitforlisten 96200 00:22:21.550 00:38:07 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 96200 ']' 00:22:21.550 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.550 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:21.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:21.550 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:21.550 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:21.550 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:21.550 [2024-12-17 00:38:07.427127] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:22:21.550 [2024-12-17 00:38:07.427221] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:21.550 [2024-12-17 00:38:07.551649] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:21.808 [2024-12-17 00:38:07.585975] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:21.808 [2024-12-17 00:38:07.586044] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:21.808 [2024-12-17 00:38:07.586069] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:21.808 [2024-12-17 00:38:07.586076] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:21.808 [2024-12-17 00:38:07.586082] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:21.808 [2024-12-17 00:38:07.586219] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:21.808 [2024-12-17 00:38:07.586228] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:21.808 [2024-12-17 00:38:07.613482] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:21.808 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:21.808 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:22:21.808 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:21.808 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:21.808 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:21.808 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:21.808 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:21.809 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:22.067 [2024-12-17 00:38:07.968671] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:22.067 00:38:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:22.633 Malloc0 00:22:22.633 00:38:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:22.633 00:38:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:22.892 00:38:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:23.150 [2024-12-17 00:38:09.014282] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:23.150 00:38:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=96242 00:22:23.150 00:38:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:23.150 00:38:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 96242 /var/tmp/bdevperf.sock 00:22:23.150 00:38:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 96242 ']' 00:22:23.151 00:38:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:23.151 00:38:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:23.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:23.151 00:38:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:23.151 00:38:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:23.151 00:38:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:23.151 [2024-12-17 00:38:09.087359] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:22:23.151 [2024-12-17 00:38:09.087454] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96242 ] 00:22:23.409 [2024-12-17 00:38:09.227051] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.409 [2024-12-17 00:38:09.269120] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:23.409 [2024-12-17 00:38:09.302029] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:24.344 00:38:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:24.344 00:38:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:22:24.344 00:38:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:24.344 00:38:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:24.602 NVMe0n1 00:22:24.602 00:38:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=96266 00:22:24.602 00:38:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:24.602 00:38:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:22:24.861 Running I/O for 10 seconds... 
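The setup traced above reduces to the RPC sequence below: configure the TCP target inside the namespace, then attach a controller from bdevperf with the reconnect parameters the timeout test exercises. This is a condensed sketch assembled from the commands in the log (same paths, sockets, and arguments as the trace), not the host/timeout.sh script verbatim.

#!/usr/bin/env bash
# Condensed RPC sequence for the nvmf_timeout setup, reconstructed from the trace.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BPERF_SOCK=/var/tmp/bdevperf.sock

# Target side (nvmf_tgt already running in nvmf_tgt_ns_spdk with -m 0x3):
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# Initiator side: bdevperf runs 128-deep 4 KiB verify I/O for 10 seconds.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r "$BPERF_SOCK" -q 128 -o 4096 -w verify -t 10 -f &
# (the test waits for the bdevperf RPC socket before issuing the calls below)

$RPC -s "$BPERF_SOCK" bdev_nvme_set_options -r -1
$RPC -s "$BPERF_SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests

With --reconnect-delay-sec 2 and --ctrlr-loss-timeout-sec 5, the bdev layer retries the connection every 2 seconds and declares the controller lost after 5 seconds; the nvmf_subsystem_remove_listener call and the burst of "ABORTED - SQ DELETION" completions that follow in the trace are the fault this test injects to exercise that path.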
00:22:25.796 00:38:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:26.058 8084.00 IOPS, 31.58 MiB/s [2024-12-17T00:38:12.061Z] [2024-12-17 00:38:11.801209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x835700 is same with the state(6) to be set 00:22:26.058 [2024-12-17 00:38:11.804605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:74896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.058 [2024-12-17 00:38:11.804650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.058 [2024-12-17 00:38:11.804674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:74904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.058 [2024-12-17 00:38:11.804684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.058 [2024-12-17 00:38:11.804695] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22af670 is same with the state(6) to be set 00:22:26.058 [2024-12-17 00:38:11.804707] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.058 [2024-12-17 00:38:11.804714] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.058 [2024-12-17 00:38:11.804722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74912 len:8 PRP1 0x0 PRP2 0x0 00:22:26.058 [2024-12-17 00:38:11.804731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.058 [2024-12-17 00:38:11.804741] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.058 [2024-12-17 00:38:11.804748] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.058 [2024-12-17 00:38:11.804755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75040 len:8 PRP1 0x0 PRP2 0x0 00:22:26.058 [2024-12-17 00:38:11.804763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.058 [2024-12-17 00:38:11.804772] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.058 [2024-12-17 00:38:11.804779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.058 [2024-12-17 00:38:11.804786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75048 len:8 PRP1 0x0 PRP2 0x0 00:22:26.058 [2024-12-17 00:38:11.804795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.058 [2024-12-17 00:38:11.804803] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.058 [2024-12-17 00:38:11.804810] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.058 [2024-12-17 00:38:11.804817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75056 len:8 PRP1 0x0 PRP2 0x0 00:22:26.058 [2024-12-17 00:38:11.804825] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.058 [2024-12-17 00:38:11.804834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.058 [2024-12-17 00:38:11.804841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.058 [2024-12-17 00:38:11.804848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75064 len:8 PRP1 0x0 PRP2 0x0 00:22:26.058 [2024-12-17 00:38:11.804856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.058 [2024-12-17 00:38:11.804864] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.058 [2024-12-17 00:38:11.804886] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.058 [2024-12-17 00:38:11.804893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75072 len:8 PRP1 0x0 PRP2 0x0 00:22:26.058 [2024-12-17 00:38:11.804918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.058 [2024-12-17 00:38:11.804943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.058 [2024-12-17 00:38:11.804964] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.058 [2024-12-17 00:38:11.804988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75080 len:8 PRP1 0x0 PRP2 0x0 00:22:26.058 [2024-12-17 00:38:11.804997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.058 [2024-12-17 00:38:11.805006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.058 [2024-12-17 00:38:11.805013] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.058 [2024-12-17 00:38:11.805021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75088 len:8 PRP1 0x0 PRP2 0x0 00:22:26.058 [2024-12-17 00:38:11.805029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.058 [2024-12-17 00:38:11.805039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.058 [2024-12-17 00:38:11.805046] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.058 [2024-12-17 00:38:11.805054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75096 len:8 PRP1 0x0 PRP2 0x0 00:22:26.058 [2024-12-17 00:38:11.805062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.058 [2024-12-17 00:38:11.805071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.059 [2024-12-17 00:38:11.805078] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.059 [2024-12-17 00:38:11.805086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75104 len:8 PRP1 0x0 PRP2 0x0 00:22:26.059 [2024-12-17 00:38:11.805094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.059 [2024-12-17 00:38:11.805103] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.059 [2024-12-17 00:38:11.805110] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.059 [2024-12-17 00:38:11.805117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75112 len:8 PRP1 0x0 PRP2 0x0 00:22:26.059 [2024-12-17 00:38:11.805126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.059 [2024-12-17 00:38:11.805135] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.059 [2024-12-17 00:38:11.805142] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.059 [2024-12-17 00:38:11.805149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75120 len:8 PRP1 0x0 PRP2 0x0 00:22:26.059 [2024-12-17 00:38:11.805157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.059 [2024-12-17 00:38:11.805166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.059 [2024-12-17 00:38:11.805173] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.059 [2024-12-17 00:38:11.805180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75128 len:8 PRP1 0x0 PRP2 0x0 00:22:26.059 [2024-12-17 00:38:11.805188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.059 [2024-12-17 00:38:11.805197] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.059 [2024-12-17 00:38:11.805204] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.059 [2024-12-17 00:38:11.805211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75136 len:8 PRP1 0x0 PRP2 0x0 00:22:26.059 [2024-12-17 00:38:11.805220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.059 [2024-12-17 00:38:11.805228] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.059 [2024-12-17 00:38:11.805235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.059 [2024-12-17 00:38:11.805243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75144 len:8 PRP1 0x0 PRP2 0x0 00:22:26.059 [2024-12-17 00:38:11.805252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.059 [2024-12-17 00:38:11.805262] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.059 [2024-12-17 00:38:11.805269] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.059 [2024-12-17 00:38:11.805276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75152 len:8 PRP1 0x0 PRP2 0x0 00:22:26.059 [2024-12-17 00:38:11.805284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.059 [2024-12-17 
00:38:11.805293] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.059 [2024-12-17 00:38:11.805300] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.059 [2024-12-17 00:38:11.805308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75160 len:8 PRP1 0x0 PRP2 0x0 00:22:26.059 [2024-12-17 00:38:11.805316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.059 [2024-12-17 00:38:11.805325] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.059 [2024-12-17 00:38:11.805332] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.059 [2024-12-17 00:38:11.805339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75168 len:8 PRP1 0x0 PRP2 0x0 00:22:26.059 [2024-12-17 00:38:11.805347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.059 [2024-12-17 00:38:11.805356] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.059 [2024-12-17 00:38:11.805363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.059 [2024-12-17 00:38:11.805370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75176 len:8 PRP1 0x0 PRP2 0x0 00:22:26.059 [2024-12-17 00:38:11.805378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.059 [2024-12-17 00:38:11.805401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.059 [2024-12-17 00:38:11.805408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.059 [2024-12-17 00:38:11.805416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75184 len:8 PRP1 0x0 PRP2 0x0 00:22:26.059 [2024-12-17 00:38:11.805424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.059 [2024-12-17 00:38:11.805433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.059 [2024-12-17 00:38:11.805440] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.059 [2024-12-17 00:38:11.805447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75192 len:8 PRP1 0x0 PRP2 0x0 00:22:26.059 [2024-12-17 00:38:11.805456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.059 [2024-12-17 00:38:11.805464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.059 [2024-12-17 00:38:11.805471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.059 [2024-12-17 00:38:11.805479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75200 len:8 PRP1 0x0 PRP2 0x0 00:22:26.059 [2024-12-17 00:38:11.805488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.059 [2024-12-17 00:38:11.805504] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.059 [2024-12-17 00:38:11.805511] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.059 [2024-12-17 00:38:11.805518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75208 len:8 PRP1 0x0 PRP2 0x0 00:22:26.059 [2024-12-17 00:38:11.805527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.059 [2024-12-17 00:38:11.805536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.059 [2024-12-17 00:38:11.805543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.059 [2024-12-17 00:38:11.805551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75216 len:8 PRP1 0x0 PRP2 0x0 00:22:26.059 [2024-12-17 00:38:11.805559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.059 [2024-12-17 00:38:11.805575] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.059 [2024-12-17 00:38:11.805582] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.059 [2024-12-17 00:38:11.805590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75224 len:8 PRP1 0x0 PRP2 0x0 00:22:26.059 [2024-12-17 00:38:11.805598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.059 [2024-12-17 00:38:11.805607] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.059 [2024-12-17 00:38:11.805614] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.059 [2024-12-17 00:38:11.805622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75232 len:8 PRP1 0x0 PRP2 0x0 00:22:26.059 [2024-12-17 00:38:11.805630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.059 [2024-12-17 00:38:11.805639] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.059 [2024-12-17 00:38:11.805645] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.059 [2024-12-17 00:38:11.805653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75240 len:8 PRP1 0x0 PRP2 0x0 00:22:26.059 [2024-12-17 00:38:11.805661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.059 [2024-12-17 00:38:11.805669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.059 [2024-12-17 00:38:11.805676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.059 [2024-12-17 00:38:11.805683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75248 len:8 PRP1 0x0 PRP2 0x0 00:22:26.059 [2024-12-17 00:38:11.805692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.059 [2024-12-17 00:38:11.805701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:22:26.059 [2024-12-17 00:38:11.805707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.059 [2024-12-17 00:38:11.805715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75256 len:8 PRP1 0x0 PRP2 0x0 00:22:26.059 [2024-12-17 00:38:11.805723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.059 [2024-12-17 00:38:11.805732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.059 [2024-12-17 00:38:11.805738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.059 [2024-12-17 00:38:11.805745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75264 len:8 PRP1 0x0 PRP2 0x0 00:22:26.059 [2024-12-17 00:38:11.805754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.059 [2024-12-17 00:38:11.805762] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.059 [2024-12-17 00:38:11.805769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.059 [2024-12-17 00:38:11.805776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75272 len:8 PRP1 0x0 PRP2 0x0 00:22:26.059 [2024-12-17 00:38:11.805785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.059 [2024-12-17 00:38:11.805794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.059 [2024-12-17 00:38:11.805801] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.059 [2024-12-17 00:38:11.805808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75280 len:8 PRP1 0x0 PRP2 0x0 00:22:26.059 [2024-12-17 00:38:11.805817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.059 [2024-12-17 00:38:11.805828] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.059 [2024-12-17 00:38:11.805835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.059 [2024-12-17 00:38:11.805842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75288 len:8 PRP1 0x0 PRP2 0x0 00:22:26.060 [2024-12-17 00:38:11.805851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.060 [2024-12-17 00:38:11.805860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.060 [2024-12-17 00:38:11.805867] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.060 [2024-12-17 00:38:11.805874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75296 len:8 PRP1 0x0 PRP2 0x0 00:22:26.060 [2024-12-17 00:38:11.805882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.060 [2024-12-17 00:38:11.805892] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.060 [2024-12-17 00:38:11.805899] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.060 [2024-12-17 00:38:11.805906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75304 len:8 PRP1 0x0 PRP2 0x0 00:22:26.060 [2024-12-17 00:38:11.805918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.060 [2024-12-17 00:38:11.805927] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.060 [2024-12-17 00:38:11.805934] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.060 [2024-12-17 00:38:11.805941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75312 len:8 PRP1 0x0 PRP2 0x0 00:22:26.060 [2024-12-17 00:38:11.805949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.060 [2024-12-17 00:38:11.805958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.060 [2024-12-17 00:38:11.805965] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.060 [2024-12-17 00:38:11.805972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75320 len:8 PRP1 0x0 PRP2 0x0 00:22:26.060 [2024-12-17 00:38:11.805980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.060 [2024-12-17 00:38:11.805989] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.060 [2024-12-17 00:38:11.805996] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.060 [2024-12-17 00:38:11.806003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75328 len:8 PRP1 0x0 PRP2 0x0 00:22:26.060 [2024-12-17 00:38:11.806011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.060 [2024-12-17 00:38:11.806020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.060 [2024-12-17 00:38:11.806027] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.060 [2024-12-17 00:38:11.806038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75336 len:8 PRP1 0x0 PRP2 0x0 00:22:26.060 [2024-12-17 00:38:11.806046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.060 [2024-12-17 00:38:11.806055] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.060 [2024-12-17 00:38:11.806061] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.060 [2024-12-17 00:38:11.806069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75344 len:8 PRP1 0x0 PRP2 0x0 00:22:26.060 [2024-12-17 00:38:11.806077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.060 [2024-12-17 00:38:11.806088] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.060 [2024-12-17 00:38:11.806095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:22:26.060 [2024-12-17 00:38:11.806102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75352 len:8 PRP1 0x0 PRP2 0x0 00:22:26.060 [2024-12-17 00:38:11.806111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.060 [2024-12-17 00:38:11.806120] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.060 [2024-12-17 00:38:11.806127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.060 [2024-12-17 00:38:11.806134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75360 len:8 PRP1 0x0 PRP2 0x0 00:22:26.060 [2024-12-17 00:38:11.806143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.060 [2024-12-17 00:38:11.806151] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.060 [2024-12-17 00:38:11.806159] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.060 [2024-12-17 00:38:11.806166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75368 len:8 PRP1 0x0 PRP2 0x0 00:22:26.060 [2024-12-17 00:38:11.806175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.060 [2024-12-17 00:38:11.806184] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.060 [2024-12-17 00:38:11.806190] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.060 [2024-12-17 00:38:11.806198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75376 len:8 PRP1 0x0 PRP2 0x0 00:22:26.060 [2024-12-17 00:38:11.806207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.060 [2024-12-17 00:38:11.806216] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.060 [2024-12-17 00:38:11.806222] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.060 [2024-12-17 00:38:11.806230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75384 len:8 PRP1 0x0 PRP2 0x0 00:22:26.060 [2024-12-17 00:38:11.806238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.060 [2024-12-17 00:38:11.806252] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.060 [2024-12-17 00:38:11.806259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.060 [2024-12-17 00:38:11.806267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75392 len:8 PRP1 0x0 PRP2 0x0 00:22:26.060 [2024-12-17 00:38:11.806275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.060 [2024-12-17 00:38:11.806284] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.060 [2024-12-17 00:38:11.806291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.060 
[2024-12-17 00:38:11.806300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75400 len:8 PRP1 0x0 PRP2 0x0 00:22:26.060 [2024-12-17 00:38:11.806319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.060 [2024-12-17 00:38:11.806329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.060 [2024-12-17 00:38:11.806336] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.060 [2024-12-17 00:38:11.806344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75408 len:8 PRP1 0x0 PRP2 0x0 00:22:26.060 [2024-12-17 00:38:11.806352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.060 [2024-12-17 00:38:11.806363] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.060 [2024-12-17 00:38:11.806370] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.060 [2024-12-17 00:38:11.806377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75416 len:8 PRP1 0x0 PRP2 0x0 00:22:26.060 [2024-12-17 00:38:11.806386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.060 [2024-12-17 00:38:11.806394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.060 [2024-12-17 00:38:11.806401] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.060 [2024-12-17 00:38:11.806408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75424 len:8 PRP1 0x0 PRP2 0x0 00:22:26.060 [2024-12-17 00:38:11.806417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.060 [2024-12-17 00:38:11.806426] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.060 [2024-12-17 00:38:11.806433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.060 [2024-12-17 00:38:11.806440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75432 len:8 PRP1 0x0 PRP2 0x0 00:22:26.060 [2024-12-17 00:38:11.806448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.060 [2024-12-17 00:38:11.806457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.060 [2024-12-17 00:38:11.806464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.060 [2024-12-17 00:38:11.806471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75440 len:8 PRP1 0x0 PRP2 0x0 00:22:26.060 [2024-12-17 00:38:11.806480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.060 [2024-12-17 00:38:11.806489] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.060 [2024-12-17 00:38:11.806495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.060 [2024-12-17 00:38:11.806503] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75448 len:8 PRP1 0x0 PRP2 0x0 00:22:26.060 [2024-12-17 00:38:11.806511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.060 [2024-12-17 00:38:11.806520] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.060 [2024-12-17 00:38:11.806526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.060 [2024-12-17 00:38:11.806534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75456 len:8 PRP1 0x0 PRP2 0x0 00:22:26.060 [2024-12-17 00:38:11.806542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.060 [2024-12-17 00:38:11.806551] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.060 [2024-12-17 00:38:11.806558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.060 [2024-12-17 00:38:11.806572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75464 len:8 PRP1 0x0 PRP2 0x0 00:22:26.060 [2024-12-17 00:38:11.806580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.060 [2024-12-17 00:38:11.806589] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.060 [2024-12-17 00:38:11.806596] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.060 [2024-12-17 00:38:11.806604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75472 len:8 PRP1 0x0 PRP2 0x0 00:22:26.060 [2024-12-17 00:38:11.806612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.060 [2024-12-17 00:38:11.806623] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.060 [2024-12-17 00:38:11.806630] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.060 [2024-12-17 00:38:11.806638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75480 len:8 PRP1 0x0 PRP2 0x0 00:22:26.061 [2024-12-17 00:38:11.806646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.061 [2024-12-17 00:38:11.806655] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.061 [2024-12-17 00:38:11.806663] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.061 [2024-12-17 00:38:11.806670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75488 len:8 PRP1 0x0 PRP2 0x0 00:22:26.061 [2024-12-17 00:38:11.806679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.061 [2024-12-17 00:38:11.806688] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.061 [2024-12-17 00:38:11.806694] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.061 [2024-12-17 00:38:11.806702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:75496 len:8 PRP1 0x0 PRP2 0x0 00:22:26.061 [2024-12-17 00:38:11.806710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.061 [2024-12-17 00:38:11.806719] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.061 [2024-12-17 00:38:11.806726] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.061 [2024-12-17 00:38:11.806733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75504 len:8 PRP1 0x0 PRP2 0x0 00:22:26.061 [2024-12-17 00:38:11.806741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.061 [2024-12-17 00:38:11.806750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.061 [2024-12-17 00:38:11.806757] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.061 [2024-12-17 00:38:11.806764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75512 len:8 PRP1 0x0 PRP2 0x0 00:22:26.061 [2024-12-17 00:38:11.806773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.061 [2024-12-17 00:38:11.806782] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.061 [2024-12-17 00:38:11.806788] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.061 [2024-12-17 00:38:11.806796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75520 len:8 PRP1 0x0 PRP2 0x0 00:22:26.061 [2024-12-17 00:38:11.806804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.061 [2024-12-17 00:38:11.806813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.061 [2024-12-17 00:38:11.806820] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.061 [2024-12-17 00:38:11.806842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75528 len:8 PRP1 0x0 PRP2 0x0 00:22:26.061 [2024-12-17 00:38:11.806852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.061 [2024-12-17 00:38:11.806862] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.061 [2024-12-17 00:38:11.806869] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.061 [2024-12-17 00:38:11.806876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75536 len:8 PRP1 0x0 PRP2 0x0 00:22:26.061 [2024-12-17 00:38:11.806884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.061 [2024-12-17 00:38:11.806896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.061 [2024-12-17 00:38:11.806903] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.061 [2024-12-17 00:38:11.806910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75544 len:8 PRP1 0x0 PRP2 0x0 
00:22:26.061 [2024-12-17 00:38:11.806918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.061 [2024-12-17 00:38:11.806927] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.061 [2024-12-17 00:38:11.806934] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.061 [2024-12-17 00:38:11.806941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75552 len:8 PRP1 0x0 PRP2 0x0 00:22:26.061 [2024-12-17 00:38:11.806950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.061 [2024-12-17 00:38:11.806958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.061 [2024-12-17 00:38:11.806965] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.061 [2024-12-17 00:38:11.806972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75560 len:8 PRP1 0x0 PRP2 0x0 00:22:26.061 [2024-12-17 00:38:11.806981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.061 [2024-12-17 00:38:11.806991] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.061 [2024-12-17 00:38:11.806998] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.061 [2024-12-17 00:38:11.807005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75568 len:8 PRP1 0x0 PRP2 0x0 00:22:26.061 [2024-12-17 00:38:11.807013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.061 [2024-12-17 00:38:11.807022] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.061 [2024-12-17 00:38:11.807029] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.061 [2024-12-17 00:38:11.807036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75576 len:8 PRP1 0x0 PRP2 0x0 00:22:26.061 [2024-12-17 00:38:11.807044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.061 [2024-12-17 00:38:11.807053] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.061 [2024-12-17 00:38:11.807060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.061 [2024-12-17 00:38:11.807067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75584 len:8 PRP1 0x0 PRP2 0x0 00:22:26.061 [2024-12-17 00:38:11.807076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.061 [2024-12-17 00:38:11.807084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.061 [2024-12-17 00:38:11.807091] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.061 [2024-12-17 00:38:11.807101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75592 len:8 PRP1 0x0 PRP2 0x0 00:22:26.061 [2024-12-17 00:38:11.807109] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.061 [2024-12-17 00:38:11.807118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.061 [2024-12-17 00:38:11.807125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.061 [2024-12-17 00:38:11.817231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75600 len:8 PRP1 0x0 PRP2 0x0 00:22:26.061 [2024-12-17 00:38:11.817264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.061 [2024-12-17 00:38:11.817279] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.061 [2024-12-17 00:38:11.817287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.061 [2024-12-17 00:38:11.817295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75608 len:8 PRP1 0x0 PRP2 0x0 00:22:26.061 [2024-12-17 00:38:11.817303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.061 [2024-12-17 00:38:11.817339] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.061 [2024-12-17 00:38:11.817346] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.061 [2024-12-17 00:38:11.817353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75616 len:8 PRP1 0x0 PRP2 0x0 00:22:26.061 [2024-12-17 00:38:11.817362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.061 [2024-12-17 00:38:11.817370] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.061 [2024-12-17 00:38:11.817376] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.061 [2024-12-17 00:38:11.817383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75624 len:8 PRP1 0x0 PRP2 0x0 00:22:26.061 [2024-12-17 00:38:11.817391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.061 [2024-12-17 00:38:11.817399] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.061 [2024-12-17 00:38:11.817406] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.061 [2024-12-17 00:38:11.817412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75632 len:8 PRP1 0x0 PRP2 0x0 00:22:26.061 [2024-12-17 00:38:11.817420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.061 [2024-12-17 00:38:11.817428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.061 [2024-12-17 00:38:11.817450] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.061 [2024-12-17 00:38:11.817474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75640 len:8 PRP1 0x0 PRP2 0x0 00:22:26.061 [2024-12-17 00:38:11.817498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.061 [2024-12-17 00:38:11.817508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.061 [2024-12-17 00:38:11.817515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.061 [2024-12-17 00:38:11.817522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75648 len:8 PRP1 0x0 PRP2 0x0 00:22:26.061 [2024-12-17 00:38:11.817531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.061 [2024-12-17 00:38:11.817540] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.061 [2024-12-17 00:38:11.817547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.061 [2024-12-17 00:38:11.817555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75656 len:8 PRP1 0x0 PRP2 0x0 00:22:26.061 [2024-12-17 00:38:11.817564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.061 [2024-12-17 00:38:11.817573] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.061 [2024-12-17 00:38:11.817579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.061 [2024-12-17 00:38:11.817587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75664 len:8 PRP1 0x0 PRP2 0x0 00:22:26.061 [2024-12-17 00:38:11.817596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.061 [2024-12-17 00:38:11.817605] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.061 [2024-12-17 00:38:11.817613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.061 [2024-12-17 00:38:11.817636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75672 len:8 PRP1 0x0 PRP2 0x0 00:22:26.062 [2024-12-17 00:38:11.817644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.062 [2024-12-17 00:38:11.817653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.062 [2024-12-17 00:38:11.817660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.062 [2024-12-17 00:38:11.817667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75680 len:8 PRP1 0x0 PRP2 0x0 00:22:26.062 [2024-12-17 00:38:11.817676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.062 [2024-12-17 00:38:11.817685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.062 [2024-12-17 00:38:11.817692] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.062 [2024-12-17 00:38:11.817699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75688 len:8 PRP1 0x0 PRP2 0x0 00:22:26.062 00:38:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:22:26.062 [2024-12-17 00:38:11.817707] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.062 [2024-12-17 00:38:11.817717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.062 [2024-12-17 00:38:11.817724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.062 [2024-12-17 00:38:11.817731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75696 len:8 PRP1 0x0 PRP2 0x0 00:22:26.062 [2024-12-17 00:38:11.817739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.062 [2024-12-17 00:38:11.817748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.062 [2024-12-17 00:38:11.817755] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.062 [2024-12-17 00:38:11.817762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75704 len:8 PRP1 0x0 PRP2 0x0 00:22:26.062 [2024-12-17 00:38:11.817770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.062 [2024-12-17 00:38:11.817779] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.062 [2024-12-17 00:38:11.817786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.062 [2024-12-17 00:38:11.817793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75712 len:8 PRP1 0x0 PRP2 0x0 00:22:26.062 [2024-12-17 00:38:11.817801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.062 [2024-12-17 00:38:11.817810] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.062 [2024-12-17 00:38:11.817816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.062 [2024-12-17 00:38:11.817824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75720 len:8 PRP1 0x0 PRP2 0x0 00:22:26.062 [2024-12-17 00:38:11.817832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.062 [2024-12-17 00:38:11.817841] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.062 [2024-12-17 00:38:11.817848] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.062 [2024-12-17 00:38:11.817856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75728 len:8 PRP1 0x0 PRP2 0x0 00:22:26.062 [2024-12-17 00:38:11.817864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.062 [2024-12-17 00:38:11.817873] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.062 [2024-12-17 00:38:11.817880] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.062 [2024-12-17 00:38:11.817887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74920 len:8 PRP1 0x0 PRP2 0x0 00:22:26.062 [2024-12-17 00:38:11.817895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.062 [2024-12-17 00:38:11.817904] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.062 [2024-12-17 00:38:11.817911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.062 [2024-12-17 00:38:11.817918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74928 len:8 PRP1 0x0 PRP2 0x0 00:22:26.062 [2024-12-17 00:38:11.817926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.062 [2024-12-17 00:38:11.817935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.062 [2024-12-17 00:38:11.817942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.062 [2024-12-17 00:38:11.817949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74936 len:8 PRP1 0x0 PRP2 0x0 00:22:26.062 [2024-12-17 00:38:11.817957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.062 [2024-12-17 00:38:11.817966] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.062 [2024-12-17 00:38:11.817972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.062 [2024-12-17 00:38:11.817980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74944 len:8 PRP1 0x0 PRP2 0x0 00:22:26.062 [2024-12-17 00:38:11.818005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.062 [2024-12-17 00:38:11.818014] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.062 [2024-12-17 00:38:11.818021] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.062 [2024-12-17 00:38:11.818029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74952 len:8 PRP1 0x0 PRP2 0x0 00:22:26.062 [2024-12-17 00:38:11.818037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.062 [2024-12-17 00:38:11.818047] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.062 [2024-12-17 00:38:11.818054] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.062 [2024-12-17 00:38:11.818061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74960 len:8 PRP1 0x0 PRP2 0x0 00:22:26.062 [2024-12-17 00:38:11.818070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.062 [2024-12-17 00:38:11.818078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.062 [2024-12-17 00:38:11.818085] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.062 [2024-12-17 00:38:11.818093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74968 len:8 PRP1 0x0 PRP2 0x0 00:22:26.062 [2024-12-17 00:38:11.818102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:26.062 [2024-12-17 00:38:11.818110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.062 [2024-12-17 00:38:11.818118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.062 [2024-12-17 00:38:11.818125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75736 len:8 PRP1 0x0 PRP2 0x0 00:22:26.062 [2024-12-17 00:38:11.818134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.062 [2024-12-17 00:38:11.818144] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.062 [2024-12-17 00:38:11.818151] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.062 [2024-12-17 00:38:11.818158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75744 len:8 PRP1 0x0 PRP2 0x0 00:22:26.062 [2024-12-17 00:38:11.818166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.062 [2024-12-17 00:38:11.818176] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.062 [2024-12-17 00:38:11.818183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.062 [2024-12-17 00:38:11.818191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75752 len:8 PRP1 0x0 PRP2 0x0 00:22:26.062 [2024-12-17 00:38:11.818199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.062 [2024-12-17 00:38:11.818208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.062 [2024-12-17 00:38:11.818215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.062 [2024-12-17 00:38:11.818223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75760 len:8 PRP1 0x0 PRP2 0x0 00:22:26.062 [2024-12-17 00:38:11.818231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.062 [2024-12-17 00:38:11.818240] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.062 [2024-12-17 00:38:11.818247] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.062 [2024-12-17 00:38:11.818255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75768 len:8 PRP1 0x0 PRP2 0x0 00:22:26.062 [2024-12-17 00:38:11.818264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.062 [2024-12-17 00:38:11.818273] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.062 [2024-12-17 00:38:11.818280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.062 [2024-12-17 00:38:11.818288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75776 len:8 PRP1 0x0 PRP2 0x0 00:22:26.063 [2024-12-17 00:38:11.818296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.063 [2024-12-17 00:38:11.818305] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.063 [2024-12-17 00:38:11.818312] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.063 [2024-12-17 00:38:11.818319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75784 len:8 PRP1 0x0 PRP2 0x0 00:22:26.063 [2024-12-17 00:38:11.818328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.063 [2024-12-17 00:38:11.818337] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.063 [2024-12-17 00:38:11.818343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.063 [2024-12-17 00:38:11.818351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75792 len:8 PRP1 0x0 PRP2 0x0 00:22:26.063 [2024-12-17 00:38:11.818360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.063 [2024-12-17 00:38:11.818369] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.063 [2024-12-17 00:38:11.818376] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.063 [2024-12-17 00:38:11.818384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75800 len:8 PRP1 0x0 PRP2 0x0 00:22:26.063 [2024-12-17 00:38:11.818392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.063 [2024-12-17 00:38:11.818411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.063 [2024-12-17 00:38:11.818419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.063 [2024-12-17 00:38:11.818426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75808 len:8 PRP1 0x0 PRP2 0x0 00:22:26.063 [2024-12-17 00:38:11.818435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.063 [2024-12-17 00:38:11.818444] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.063 [2024-12-17 00:38:11.818451] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.063 [2024-12-17 00:38:11.818459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75816 len:8 PRP1 0x0 PRP2 0x0 00:22:26.063 [2024-12-17 00:38:11.818468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.063 [2024-12-17 00:38:11.818477] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.063 [2024-12-17 00:38:11.818483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.063 [2024-12-17 00:38:11.818491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75824 len:8 PRP1 0x0 PRP2 0x0 00:22:26.063 [2024-12-17 00:38:11.818500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.063 [2024-12-17 00:38:11.818508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:22:26.063 [2024-12-17 00:38:11.818515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.063 [2024-12-17 00:38:11.818523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75832 len:8 PRP1 0x0 PRP2 0x0 00:22:26.063 [2024-12-17 00:38:11.818532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.063 [2024-12-17 00:38:11.818541] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.063 [2024-12-17 00:38:11.818548] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.063 [2024-12-17 00:38:11.818555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75840 len:8 PRP1 0x0 PRP2 0x0 00:22:26.063 [2024-12-17 00:38:11.818564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.063 [2024-12-17 00:38:11.818573] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.063 [2024-12-17 00:38:11.818580] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.063 [2024-12-17 00:38:11.818587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75848 len:8 PRP1 0x0 PRP2 0x0 00:22:26.063 [2024-12-17 00:38:11.818595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.063 [2024-12-17 00:38:11.818605] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.063 [2024-12-17 00:38:11.818612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.063 [2024-12-17 00:38:11.818619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75856 len:8 PRP1 0x0 PRP2 0x0 00:22:26.063 [2024-12-17 00:38:11.818643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.063 [2024-12-17 00:38:11.818652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.063 [2024-12-17 00:38:11.818659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.063 [2024-12-17 00:38:11.818666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75864 len:8 PRP1 0x0 PRP2 0x0 00:22:26.063 [2024-12-17 00:38:11.818675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.063 [2024-12-17 00:38:11.818684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.063 [2024-12-17 00:38:11.818691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.063 [2024-12-17 00:38:11.818698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75872 len:8 PRP1 0x0 PRP2 0x0 00:22:26.063 [2024-12-17 00:38:11.818707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.063 [2024-12-17 00:38:11.818715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.063 [2024-12-17 00:38:11.818722] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.063 [2024-12-17 00:38:11.818729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75880 len:8 PRP1 0x0 PRP2 0x0 00:22:26.063 [2024-12-17 00:38:11.818738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.063 [2024-12-17 00:38:11.818746] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.063 [2024-12-17 00:38:11.818753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.063 [2024-12-17 00:38:11.818760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75888 len:8 PRP1 0x0 PRP2 0x0 00:22:26.063 [2024-12-17 00:38:11.818768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.063 [2024-12-17 00:38:11.818777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.063 [2024-12-17 00:38:11.818784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.063 [2024-12-17 00:38:11.818791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75896 len:8 PRP1 0x0 PRP2 0x0 00:22:26.063 [2024-12-17 00:38:11.818800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.063 [2024-12-17 00:38:11.818808] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.063 [2024-12-17 00:38:11.818815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.063 [2024-12-17 00:38:11.818822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75904 len:8 PRP1 0x0 PRP2 0x0 00:22:26.063 [2024-12-17 00:38:11.818831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.063 [2024-12-17 00:38:11.818840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.063 [2024-12-17 00:38:11.818846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.063 [2024-12-17 00:38:11.818854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75912 len:8 PRP1 0x0 PRP2 0x0 00:22:26.063 [2024-12-17 00:38:11.818865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.063 [2024-12-17 00:38:11.818875] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.063 [2024-12-17 00:38:11.818882] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.063 [2024-12-17 00:38:11.818889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74976 len:8 PRP1 0x0 PRP2 0x0 00:22:26.063 [2024-12-17 00:38:11.818898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.063 [2024-12-17 00:38:11.818906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.063 [2024-12-17 00:38:11.818913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:22:26.063 [2024-12-17 00:38:11.818921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74984 len:8 PRP1 0x0 PRP2 0x0 00:22:26.063 [2024-12-17 00:38:11.818929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.063 [2024-12-17 00:38:11.818938] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.063 [2024-12-17 00:38:11.818944] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.063 [2024-12-17 00:38:11.818952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74992 len:8 PRP1 0x0 PRP2 0x0 00:22:26.063 [2024-12-17 00:38:11.818960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.063 [2024-12-17 00:38:11.818969] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.063 [2024-12-17 00:38:11.818975] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.063 [2024-12-17 00:38:11.818983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75000 len:8 PRP1 0x0 PRP2 0x0 00:22:26.063 [2024-12-17 00:38:11.818991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.063 [2024-12-17 00:38:11.819000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.063 [2024-12-17 00:38:11.819006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.063 [2024-12-17 00:38:11.819014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75008 len:8 PRP1 0x0 PRP2 0x0 00:22:26.063 [2024-12-17 00:38:11.819022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.063 [2024-12-17 00:38:11.819031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.063 [2024-12-17 00:38:11.819037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.063 [2024-12-17 00:38:11.819045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75016 len:8 PRP1 0x0 PRP2 0x0 00:22:26.063 [2024-12-17 00:38:11.819053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.064 [2024-12-17 00:38:11.819062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.064 [2024-12-17 00:38:11.819069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.064 [2024-12-17 00:38:11.819076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75024 len:8 PRP1 0x0 PRP2 0x0 00:22:26.064 [2024-12-17 00:38:11.819088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.064 [2024-12-17 00:38:11.819097] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.064 [2024-12-17 00:38:11.819104] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.064 [2024-12-17 
00:38:11.819111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75032 len:8 PRP1 0x0 PRP2 0x0 00:22:26.064 [2024-12-17 00:38:11.819121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.064 [2024-12-17 00:38:11.819163] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22af670 was disconnected and freed. reset controller. 00:22:26.064 [2024-12-17 00:38:11.819279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.064 [2024-12-17 00:38:11.819296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.064 [2024-12-17 00:38:11.819307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.064 [2024-12-17 00:38:11.819331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.064 [2024-12-17 00:38:11.819356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.064 [2024-12-17 00:38:11.819366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.064 [2024-12-17 00:38:11.819375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.064 [2024-12-17 00:38:11.819384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.064 [2024-12-17 00:38:11.819393] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228e630 is same with the state(6) to be set 00:22:26.064 [2024-12-17 00:38:11.819633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:26.064 [2024-12-17 00:38:11.819665] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228e630 (9): Bad file descriptor 00:22:26.064 [2024-12-17 00:38:11.819770] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.064 [2024-12-17 00:38:11.819791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x228e630 with addr=10.0.0.3, port=4420 00:22:26.064 [2024-12-17 00:38:11.819801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228e630 is same with the state(6) to be set 00:22:26.064 [2024-12-17 00:38:11.819817] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228e630 (9): Bad file descriptor 00:22:26.064 [2024-12-17 00:38:11.819832] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:26.064 [2024-12-17 00:38:11.819840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:26.064 [2024-12-17 00:38:11.819850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:26.064 [2024-12-17 00:38:11.819869] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
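The records above show the host-side teardown once the listener at 10.0.0.3:4420 goes away: every queued WRITE/READ is completed manually with ABORTED - SQ DELETION (generic status 00/08), the qpair 0x22af670 is disconnected and freed, and bdev_nvme enters its reset/reconnect loop, with each attempt failing on connect() errno 111. The same state can be watched from outside the test with the RPCs the harness itself uses; the loop below is a hypothetical watcher (not part of timeout.sh), built only from the rpc.py and jq calls visible in this log, that reports when the controller is deleted after the configured controller-loss timeout (the attach options for this first run appear earlier in the log, outside this excerpt). For scale, the aborted-command records in a captured console log can be counted with grep -c 'ABORTED - SQ DELETION' build.log (file name hypothetical).

    # Hypothetical watcher: poll the bdevperf RPC socket once per second and stop
    # when the NVMe controller disappears, i.e. after the loss timeout expires.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    while :; do
        ctrl=$("$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name')
        echo "$(date +%T) controllers: ${ctrl:-<none>}"
        [ -z "$ctrl" ] && break   # empty list => the controller was deleted
        sleep 1
    done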
00:22:26.064 [2024-12-17 00:38:11.819879] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:27.936 4681.00 IOPS, 18.29 MiB/s [2024-12-17T00:38:13.939Z] 3120.67 IOPS, 12.19 MiB/s [2024-12-17T00:38:13.939Z] [2024-12-17 00:38:13.820050] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:27.936 [2024-12-17 00:38:13.820127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x228e630 with addr=10.0.0.3, port=4420 00:22:27.936 [2024-12-17 00:38:13.820143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228e630 is same with the state(6) to be set 00:22:27.936 [2024-12-17 00:38:13.820164] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228e630 (9): Bad file descriptor 00:22:27.936 [2024-12-17 00:38:13.820182] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:27.936 [2024-12-17 00:38:13.820191] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:27.936 [2024-12-17 00:38:13.820201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:27.936 [2024-12-17 00:38:13.820224] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:27.936 [2024-12-17 00:38:13.820235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:27.936 00:38:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:22:27.936 00:38:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:27.936 00:38:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:28.227 00:38:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:22:28.227 00:38:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:22:28.227 00:38:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:28.227 00:38:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:28.486 00:38:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:22:28.486 00:38:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:22:29.677 2340.50 IOPS, 9.14 MiB/s [2024-12-17T00:38:15.938Z] 1872.40 IOPS, 7.31 MiB/s [2024-12-17T00:38:15.938Z] [2024-12-17 00:38:15.820409] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:29.935 [2024-12-17 00:38:15.820471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x228e630 with addr=10.0.0.3, port=4420 00:22:29.935 [2024-12-17 00:38:15.820486] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228e630 is same with the state(6) to be set 00:22:29.935 [2024-12-17 00:38:15.820508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228e630 (9): Bad file descriptor 00:22:29.935 [2024-12-17 00:38:15.820525] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:29.935 [2024-12-17 00:38:15.820533] 
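At host/timeout.sh@57 the test confirms the controller is still registered by piping bdev_nvme_get_controllers through jq, and the following records do the same for the namespace bdev via bdev_get_bdevs. The helper bodies are not printed in this log; the sketch below is a plausible reconstruction inferred from the rpc.py/jq invocations at timeout.sh@41 and @37, so treat the function definitions as assumptions rather than the script's exact text.

    # Reconstructed helpers (inferred, not copied from timeout.sh):
    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    get_controller() {
        # prints e.g. "NVMe0" while the controller exists, nothing once it is lost
        $rpc_py bdev_nvme_get_controllers | jq -r '.[].name'
    }
    get_bdev() {
        # prints e.g. "NVMe0n1" while the namespace bdev is still exposed
        $rpc_py bdev_get_bdevs | jq -r '.[].name'
    }
    [[ "$(get_controller)" == "NVMe0" ]]    # the check at timeout.sh@57
    [[ "$(get_bdev)" == "NVMe0n1" ]]        # the check at timeout.sh@58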
nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:29.935 [2024-12-17 00:38:15.820543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:29.935 [2024-12-17 00:38:15.820591] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:29.935 [2024-12-17 00:38:15.820602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:31.805 1560.33 IOPS, 6.10 MiB/s [2024-12-17T00:38:18.067Z] 1337.43 IOPS, 5.22 MiB/s [2024-12-17T00:38:18.067Z] [2024-12-17 00:38:17.820712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:32.064 [2024-12-17 00:38:17.820750] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:32.064 [2024-12-17 00:38:17.820761] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:32.064 [2024-12-17 00:38:17.820771] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:22:32.064 [2024-12-17 00:38:17.820793] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:32.998 1170.25 IOPS, 4.57 MiB/s 00:22:32.998 Latency(us) 00:22:32.998 [2024-12-17T00:38:19.001Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.998 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:32.998 Verification LBA range: start 0x0 length 0x4000 00:22:32.998 NVMe0n1 : 8.19 1143.60 4.47 15.64 0.00 110539.48 3693.85 7046430.72 00:22:32.998 [2024-12-17T00:38:19.001Z] =================================================================================================================== 00:22:32.998 [2024-12-17T00:38:19.001Z] Total : 1143.60 4.47 15.64 0.00 110539.48 3693.85 7046430.72 00:22:32.998 { 00:22:32.998 "results": [ 00:22:32.998 { 00:22:32.998 "job": "NVMe0n1", 00:22:32.998 "core_mask": "0x4", 00:22:32.998 "workload": "verify", 00:22:32.998 "status": "finished", 00:22:32.998 "verify_range": { 00:22:32.998 "start": 0, 00:22:32.998 "length": 16384 00:22:32.998 }, 00:22:32.998 "queue_depth": 128, 00:22:32.998 "io_size": 4096, 00:22:32.998 "runtime": 8.186439, 00:22:32.998 "iops": 1143.5985780874933, 00:22:32.998 "mibps": 4.4671819456542705, 00:22:32.998 "io_failed": 128, 00:22:32.998 "io_timeout": 0, 00:22:32.998 "avg_latency_us": 110539.47985515856, 00:22:32.998 "min_latency_us": 3693.847272727273, 00:22:32.998 "max_latency_us": 7046430.72 00:22:32.998 } 00:22:32.998 ], 00:22:32.998 "core_count": 1 00:22:32.998 } 00:22:33.565 00:38:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:22:33.565 00:38:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:33.565 00:38:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:33.824 00:38:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:22:33.824 00:38:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:22:33.824 00:38:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:33.824 00:38:19 nvmf_tcp.nvmf_host.nvmf_timeout -- 
host/timeout.sh@37 -- # jq -r '.[].name' 00:22:34.083 00:38:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:22:34.083 00:38:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 96266 00:22:34.083 00:38:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 96242 00:22:34.083 00:38:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 96242 ']' 00:22:34.083 00:38:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 96242 00:22:34.083 00:38:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:22:34.083 00:38:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:34.083 00:38:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96242 00:22:34.083 00:38:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:34.083 00:38:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:34.083 killing process with pid 96242 00:22:34.083 00:38:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96242' 00:22:34.083 Received shutdown signal, test time was about 9.281314 seconds 00:22:34.083 00:22:34.083 Latency(us) 00:22:34.083 [2024-12-17T00:38:20.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:34.083 [2024-12-17T00:38:20.086Z] =================================================================================================================== 00:22:34.083 [2024-12-17T00:38:20.086Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:34.083 00:38:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 96242 00:22:34.083 00:38:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 96242 00:22:34.083 00:38:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:34.342 [2024-12-17 00:38:20.246138] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:34.342 00:38:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=96383 00:22:34.342 00:38:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:34.342 00:38:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 96383 /var/tmp/bdevperf.sock 00:22:34.342 00:38:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 96383 ']' 00:22:34.342 00:38:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:34.342 00:38:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:34.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:34.342 00:38:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:34.342 00:38:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:34.342 00:38:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:34.342 [2024-12-17 00:38:20.307040] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:22:34.342 [2024-12-17 00:38:20.307125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96383 ] 00:22:34.600 [2024-12-17 00:38:20.441051] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.600 [2024-12-17 00:38:20.475046] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:34.601 [2024-12-17 00:38:20.504000] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:35.534 00:38:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:35.534 00:38:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:22:35.534 00:38:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:35.535 00:38:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:22:35.793 NVMe0n1 00:22:35.793 00:38:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=96412 00:22:35.793 00:38:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:35.793 00:38:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:22:36.051 Running I/O for 10 seconds... 
00:22:36.987 00:38:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:37.257 8832.00 IOPS, 34.50 MiB/s [2024-12-17T00:38:23.260Z] [2024-12-17 00:38:23.063227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x834d50 is same with the state(6) to be set 00:22:37.257 [2024-12-17 00:38:23.063290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x834d50 is same with the state(6) to be set 00:22:37.257 [2024-12-17 00:38:23.063300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x834d50 is same with the state(6) to be set 00:22:37.257 [2024-12-17 00:38:23.063327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x834d50 is same with the state(6) to be set 00:22:37.257 [2024-12-17 00:38:23.063352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x834d50 is same with the state(6) to be set 00:22:37.257 [2024-12-17 00:38:23.063948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:86416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.257 [2024-12-17 00:38:23.063987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.257 [2024-12-17 00:38:23.064008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:86424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.257 [2024-12-17 00:38:23.064019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.257 [2024-12-17 00:38:23.064030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:86432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.257 [2024-12-17 00:38:23.064039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.257 [2024-12-17 00:38:23.064050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:86440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.257 [2024-12-17 00:38:23.064074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.257 [2024-12-17 00:38:23.064085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:86448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.257 [2024-12-17 00:38:23.064094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.257 [2024-12-17 00:38:23.064104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:86456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.257 [2024-12-17 00:38:23.064129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.257 [2024-12-17 00:38:23.064140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:86784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.257 [2024-12-17 00:38:23.064148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.257 
[2024-12-17 00:38:23.064160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:86792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.257 [2024-12-17 00:38:23.064169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.257 [2024-12-17 00:38:23.064180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.257 [2024-12-17 00:38:23.064189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.257 [2024-12-17 00:38:23.064199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:86808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.257 [2024-12-17 00:38:23.064208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.257 [2024-12-17 00:38:23.064219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:86816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.257 [2024-12-17 00:38:23.064228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.257 [2024-12-17 00:38:23.064238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:86824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.257 [2024-12-17 00:38:23.064246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.257 [2024-12-17 00:38:23.064257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:86832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.257 [2024-12-17 00:38:23.064266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.257 [2024-12-17 00:38:23.064276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:86840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.257 [2024-12-17 00:38:23.064284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.257 [2024-12-17 00:38:23.064301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:86464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.257 [2024-12-17 00:38:23.064309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.257 [2024-12-17 00:38:23.064319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:86472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.257 [2024-12-17 00:38:23.064328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.257 [2024-12-17 00:38:23.064353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:86480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.257 [2024-12-17 00:38:23.064364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.257 [2024-12-17 00:38:23.064376] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:86488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.257 [2024-12-17 00:38:23.064386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.257 [2024-12-17 00:38:23.064397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:86496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.257 [2024-12-17 00:38:23.064405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.257 [2024-12-17 00:38:23.064417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:86504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.257 [2024-12-17 00:38:23.064426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.257 [2024-12-17 00:38:23.064437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:86512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.257 [2024-12-17 00:38:23.064446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.257 [2024-12-17 00:38:23.064456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:86520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.257 [2024-12-17 00:38:23.064465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.257 [2024-12-17 00:38:23.064475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:86848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.257 [2024-12-17 00:38:23.064484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.257 [2024-12-17 00:38:23.064495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.257 [2024-12-17 00:38:23.064504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.257 [2024-12-17 00:38:23.064514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:86864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.257 [2024-12-17 00:38:23.064522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.257 [2024-12-17 00:38:23.064533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:86872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.257 [2024-12-17 00:38:23.064542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.257 [2024-12-17 00:38:23.064578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:86880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.257 [2024-12-17 00:38:23.064588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.257 [2024-12-17 00:38:23.064598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:81 nsid:1 lba:86888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.257 [2024-12-17 00:38:23.064608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.257 [2024-12-17 00:38:23.064618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:86896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.257 [2024-12-17 00:38:23.064627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.257 [2024-12-17 00:38:23.064637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:86904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.257 [2024-12-17 00:38:23.064646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.257 [2024-12-17 00:38:23.064657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.257 [2024-12-17 00:38:23.064666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.257 [2024-12-17 00:38:23.064677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:86920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.257 [2024-12-17 00:38:23.064686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.257 [2024-12-17 00:38:23.064696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:86928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.257 [2024-12-17 00:38:23.064705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.257 [2024-12-17 00:38:23.064717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:86936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.257 [2024-12-17 00:38:23.064726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.064737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:86944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.064745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.064756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:86952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.064765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.064777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:86960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.064786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.064796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:86968 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.064805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.064816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:86976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.064825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.064836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:86984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.064844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.064855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:86992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.064864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.064875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:87000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.064885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.064895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:87008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.064904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.064929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:87016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.064938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.064948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:87024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.064957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.064967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:87032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.064975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.064986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:86528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.258 [2024-12-17 00:38:23.064994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:86536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.258 
[2024-12-17 00:38:23.065018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:86544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.258 [2024-12-17 00:38:23.065038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:86552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.258 [2024-12-17 00:38:23.065058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:86560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.258 [2024-12-17 00:38:23.065077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:86568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.258 [2024-12-17 00:38:23.065096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:86576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.258 [2024-12-17 00:38:23.065115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:86584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.258 [2024-12-17 00:38:23.065135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.065154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:87048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.065173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.065192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:87064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.065211] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:87072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.065231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:87080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.065250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:87088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.065270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:87096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.065289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:87104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.065308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:87112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.065326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:87120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.065356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:87128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.065377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:87136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.065395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:87144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.065414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:87152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.065435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:87160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.065454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:86592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.258 [2024-12-17 00:38:23.065474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.258 [2024-12-17 00:38:23.065493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.258 [2024-12-17 00:38:23.065512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:86616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.258 [2024-12-17 00:38:23.065531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:86624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.258 [2024-12-17 00:38:23.065550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:86632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.258 [2024-12-17 00:38:23.065569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:86640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.258 [2024-12-17 00:38:23.065588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:86648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.258 [2024-12-17 00:38:23.065607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:87168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.065627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.065646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:87184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.065666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:87192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.065685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:87200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.065705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:87208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.065723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:87216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.065743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:87224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.065762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:87232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.065781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:87240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.065800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 
00:38:23.065811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:87248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.065819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:87256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.065838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:87264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.065858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:87272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.065877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:87280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.065896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:87288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.065915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.065934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:87304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.065954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:87312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.065973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.065984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:87320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.258 [2024-12-17 00:38:23.065993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.066003] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:86656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.258 [2024-12-17 00:38:23.066018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.066029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:86664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.258 [2024-12-17 00:38:23.066037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.066047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:86672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.258 [2024-12-17 00:38:23.066056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.066067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:86680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.258 [2024-12-17 00:38:23.066075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.066086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:86688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.258 [2024-12-17 00:38:23.066094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.066104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:86696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.258 [2024-12-17 00:38:23.066113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.066124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:86704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.258 [2024-12-17 00:38:23.066133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.066143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:86712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.258 [2024-12-17 00:38:23.066152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.258 [2024-12-17 00:38:23.066168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:86720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.259 [2024-12-17 00:38:23.066177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.259 [2024-12-17 00:38:23.066188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.259 [2024-12-17 00:38:23.066196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.259 [2024-12-17 00:38:23.066207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:94 nsid:1 lba:86736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.259 [2024-12-17 00:38:23.066216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.259 [2024-12-17 00:38:23.066226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:86744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.259 [2024-12-17 00:38:23.066235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.259 [2024-12-17 00:38:23.066245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:86752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.259 [2024-12-17 00:38:23.066254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.259 [2024-12-17 00:38:23.066264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:86760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.259 [2024-12-17 00:38:23.066273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.259 [2024-12-17 00:38:23.066283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:86768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.259 [2024-12-17 00:38:23.066292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.259 [2024-12-17 00:38:23.066301] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10819c0 is same with the state(6) to be set 00:22:37.259 [2024-12-17 00:38:23.066323] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.259 [2024-12-17 00:38:23.066331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.259 [2024-12-17 00:38:23.066342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86776 len:8 PRP1 0x0 PRP2 0x0 00:22:37.259 [2024-12-17 00:38:23.066351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.259 [2024-12-17 00:38:23.066360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.259 [2024-12-17 00:38:23.066367] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.259 [2024-12-17 00:38:23.066374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87328 len:8 PRP1 0x0 PRP2 0x0 00:22:37.259 [2024-12-17 00:38:23.066383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.259 [2024-12-17 00:38:23.066392] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.259 [2024-12-17 00:38:23.066398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.259 [2024-12-17 00:38:23.066406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87336 len:8 PRP1 0x0 PRP2 0x0 00:22:37.259 [2024-12-17 00:38:23.066414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:37.259 [2024-12-17 00:38:23.066423] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.259 [2024-12-17 00:38:23.066429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.259 [2024-12-17 00:38:23.066437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87344 len:8 PRP1 0x0 PRP2 0x0 00:22:37.259 [2024-12-17 00:38:23.066445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.259 [2024-12-17 00:38:23.066454] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.259 [2024-12-17 00:38:23.066462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.259 [2024-12-17 00:38:23.066470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87352 len:8 PRP1 0x0 PRP2 0x0 00:22:37.259 [2024-12-17 00:38:23.066478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.259 [2024-12-17 00:38:23.066486] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.259 [2024-12-17 00:38:23.066493] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.259 [2024-12-17 00:38:23.066500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87360 len:8 PRP1 0x0 PRP2 0x0 00:22:37.259 [2024-12-17 00:38:23.066508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.259 [2024-12-17 00:38:23.066517] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.259 [2024-12-17 00:38:23.066523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.259 [2024-12-17 00:38:23.066531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87368 len:8 PRP1 0x0 PRP2 0x0 00:22:37.259 [2024-12-17 00:38:23.066539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.259 [2024-12-17 00:38:23.066548] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.259 [2024-12-17 00:38:23.066555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.259 [2024-12-17 00:38:23.066562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87376 len:8 PRP1 0x0 PRP2 0x0 00:22:37.259 [2024-12-17 00:38:23.066570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.259 [2024-12-17 00:38:23.066579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.259 [2024-12-17 00:38:23.066586] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.259 [2024-12-17 00:38:23.066595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87384 len:8 PRP1 0x0 PRP2 0x0 00:22:37.259 [2024-12-17 00:38:23.066604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.259 [2024-12-17 
00:38:23.066612] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.259 [2024-12-17 00:38:23.066619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.259 [2024-12-17 00:38:23.066626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87392 len:8 PRP1 0x0 PRP2 0x0 00:22:37.259 [2024-12-17 00:38:23.066634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.259 [2024-12-17 00:38:23.066643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.259 [2024-12-17 00:38:23.066650] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.259 [2024-12-17 00:38:23.066657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87400 len:8 PRP1 0x0 PRP2 0x0 00:22:37.259 [2024-12-17 00:38:23.066665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.259 [2024-12-17 00:38:23.066674] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.259 [2024-12-17 00:38:23.066680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.259 [2024-12-17 00:38:23.066688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87408 len:8 PRP1 0x0 PRP2 0x0 00:22:37.259 [2024-12-17 00:38:23.066696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.259 [2024-12-17 00:38:23.066705] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.259 [2024-12-17 00:38:23.066713] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.259 [2024-12-17 00:38:23.066721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87416 len:8 PRP1 0x0 PRP2 0x0 00:22:37.259 [2024-12-17 00:38:23.066729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.259 [2024-12-17 00:38:23.066738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.259 [2024-12-17 00:38:23.066745] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.259 [2024-12-17 00:38:23.066752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87424 len:8 PRP1 0x0 PRP2 0x0 00:22:37.259 [2024-12-17 00:38:23.066761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.259 [2024-12-17 00:38:23.066769] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:37.259 [2024-12-17 00:38:23.066776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:37.259 [2024-12-17 00:38:23.066783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87432 len:8 PRP1 0x0 PRP2 0x0 00:22:37.259 [2024-12-17 00:38:23.066791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.259 [2024-12-17 00:38:23.066830] 
bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10819c0 was disconnected and freed. reset controller. 00:22:37.259 [2024-12-17 00:38:23.066926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:37.259 [2024-12-17 00:38:23.066953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.259 [2024-12-17 00:38:23.066964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:37.259 [2024-12-17 00:38:23.066973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.259 [2024-12-17 00:38:23.066982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:37.259 [2024-12-17 00:38:23.066994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.259 [2024-12-17 00:38:23.067003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:37.259 [2024-12-17 00:38:23.067012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.259 [2024-12-17 00:38:23.067020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10608b0 is same with the state(6) to be set 00:22:37.259 [2024-12-17 00:38:23.067221] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:37.259 [2024-12-17 00:38:23.067260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10608b0 (9): Bad file descriptor 00:22:37.259 [2024-12-17 00:38:23.067361] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:37.259 [2024-12-17 00:38:23.067384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10608b0 with addr=10.0.0.3, port=4420 00:22:37.259 [2024-12-17 00:38:23.067395] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10608b0 is same with the state(6) to be set 00:22:37.259 [2024-12-17 00:38:23.067413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10608b0 (9): Bad file descriptor 00:22:37.259 [2024-12-17 00:38:23.067428] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:37.259 [2024-12-17 00:38:23.067437] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:37.259 [2024-12-17 00:38:23.067447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:37.259 [2024-12-17 00:38:23.067466] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:37.259 [2024-12-17 00:38:23.067476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:37.259 00:38:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:22:38.193 5401.00 IOPS, 21.10 MiB/s [2024-12-17T00:38:24.196Z] [2024-12-17 00:38:24.067572] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.193 [2024-12-17 00:38:24.067652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10608b0 with addr=10.0.0.3, port=4420 00:22:38.193 [2024-12-17 00:38:24.067668] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10608b0 is same with the state(6) to be set 00:22:38.193 [2024-12-17 00:38:24.067689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10608b0 (9): Bad file descriptor 00:22:38.193 [2024-12-17 00:38:24.067706] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:38.193 [2024-12-17 00:38:24.067715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:38.193 [2024-12-17 00:38:24.067726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:38.193 [2024-12-17 00:38:24.067761] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:38.193 [2024-12-17 00:38:24.067775] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:38.193 00:38:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:38.452 [2024-12-17 00:38:24.304234] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:38.452 00:38:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 96412 00:22:39.283 3600.67 IOPS, 14.07 MiB/s [2024-12-17T00:38:25.286Z] [2024-12-17 00:38:25.081439] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:41.156 2700.50 IOPS, 10.55 MiB/s [2024-12-17T00:38:28.093Z] 3878.60 IOPS, 15.15 MiB/s [2024-12-17T00:38:29.028Z] 5024.17 IOPS, 19.63 MiB/s [2024-12-17T00:38:29.963Z] 5872.14 IOPS, 22.94 MiB/s [2024-12-17T00:38:30.903Z] 6482.12 IOPS, 25.32 MiB/s [2024-12-17T00:38:32.278Z] 6944.56 IOPS, 27.13 MiB/s [2024-12-17T00:38:32.278Z] 7321.70 IOPS, 28.60 MiB/s 00:22:46.275 Latency(us) 00:22:46.275 [2024-12-17T00:38:32.278Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:46.275 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:46.275 Verification LBA range: start 0x0 length 0x4000 00:22:46.275 NVMe0n1 : 10.01 7325.72 28.62 0.00 0.00 17443.40 1064.96 3019898.88 00:22:46.275 [2024-12-17T00:38:32.278Z] =================================================================================================================== 00:22:46.275 [2024-12-17T00:38:32.278Z] Total : 7325.72 28.62 0.00 0.00 17443.40 1064.96 3019898.88 00:22:46.275 { 00:22:46.275 "results": [ 00:22:46.275 { 00:22:46.275 "job": "NVMe0n1", 00:22:46.275 "core_mask": "0x4", 00:22:46.275 "workload": "verify", 00:22:46.275 "status": "finished", 00:22:46.275 "verify_range": { 00:22:46.275 "start": 0, 00:22:46.275 "length": 16384 00:22:46.275 }, 00:22:46.275 "queue_depth": 128, 00:22:46.275 "io_size": 4096, 00:22:46.275 "runtime": 10.007619, 00:22:46.275 "iops": 7325.718535048147, 00:22:46.275 "mibps": 28.616088027531823, 00:22:46.275 "io_failed": 0, 00:22:46.275 "io_timeout": 0, 00:22:46.275 "avg_latency_us": 17443.40438513323, 00:22:46.275 "min_latency_us": 1064.96, 00:22:46.275 "max_latency_us": 3019898.88 00:22:46.275 } 00:22:46.275 ], 00:22:46.275 "core_count": 1 00:22:46.275 } 00:22:46.275 00:38:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=96511 00:22:46.275 00:38:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:46.275 00:38:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:22:46.275 Running I/O for 10 seconds... 
00:22:47.213 00:38:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:47.213 7972.00 IOPS, 31.14 MiB/s [2024-12-17T00:38:33.216Z] [2024-12-17 00:38:33.170765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:73624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.213 [2024-12-17 00:38:33.170824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.213 [2024-12-17 00:38:33.170845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:73632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.213 [2024-12-17 00:38:33.170855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.213 [2024-12-17 00:38:33.170866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:73640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.213 [2024-12-17 00:38:33.170874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.213 [2024-12-17 00:38:33.170885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:73648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.213 [2024-12-17 00:38:33.170893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.213 [2024-12-17 00:38:33.170904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:73656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.213 [2024-12-17 00:38:33.170912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.213 [2024-12-17 00:38:33.170930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:73664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.213 [2024-12-17 00:38:33.170939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.213 [2024-12-17 00:38:33.170949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:73672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.213 [2024-12-17 00:38:33.170957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.213 [2024-12-17 00:38:33.170967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:73680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.213 [2024-12-17 00:38:33.170975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.213 [2024-12-17 00:38:33.170985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:73688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.213 [2024-12-17 00:38:33.170994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.213 [2024-12-17 00:38:33.171004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:73696 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.213 [2024-12-17 00:38:33.171012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.213 [2024-12-17 00:38:33.171022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:73704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.214 [2024-12-17 00:38:33.171030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.214 [2024-12-17 00:38:33.171040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:73712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.214 [2024-12-17 00:38:33.171049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.214 [2024-12-17 00:38:33.171059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:73720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.214 [2024-12-17 00:38:33.171067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.214 [2024-12-17 00:38:33.171084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:73728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.214 [2024-12-17 00:38:33.171093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.214 [2024-12-17 00:38:33.171103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:73736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.214 [2024-12-17 00:38:33.171111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.214 [2024-12-17 00:38:33.171120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.214 [2024-12-17 00:38:33.171129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.214 [2024-12-17 00:38:33.171139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.214 [2024-12-17 00:38:33.171147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.214 [2024-12-17 00:38:33.171158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:73760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.214 [2024-12-17 00:38:33.171167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.214 [2024-12-17 00:38:33.171177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:73768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.214 [2024-12-17 00:38:33.171185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.214 [2024-12-17 00:38:33.171195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:73776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:47.214 [2024-12-17 00:38:33.171203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.214 [2024-12-17 00:38:33.171214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:73784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.214 [2024-12-17 00:38:33.171222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.214 [2024-12-17 00:38:33.171232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:73792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.214 [2024-12-17 00:38:33.171241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.214 [2024-12-17 00:38:33.171267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:73800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.214 [2024-12-17 00:38:33.171275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.214 [2024-12-17 00:38:33.171285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:73808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.214 [2024-12-17 00:38:33.171294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.214 [2024-12-17 00:38:33.171304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:73816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.214 [2024-12-17 00:38:33.171312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.214 [2024-12-17 00:38:33.171323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:73824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.214 [2024-12-17 00:38:33.171332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.214 [2024-12-17 00:38:33.171356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:73832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.214 [2024-12-17 00:38:33.171367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.214 [2024-12-17 00:38:33.171377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:73840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.214 [2024-12-17 00:38:33.171385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.214 [2024-12-17 00:38:33.171396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:73848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.214 [2024-12-17 00:38:33.171404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.214 [2024-12-17 00:38:33.171414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:73856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.214 [2024-12-17 00:38:33.171423] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.214 [2024-12-17 00:38:33.171434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:73864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.214 [2024-12-17 00:38:33.171443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.214 [2024-12-17 00:38:33.171453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:72872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.214 [2024-12-17 00:38:33.171461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.214 [2024-12-17 00:38:33.171472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:72880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.214 [2024-12-17 00:38:33.171480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.214 [2024-12-17 00:38:33.171491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:72888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.214 [2024-12-17 00:38:33.171500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.214 [2024-12-17 00:38:33.171511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:72896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.214 [2024-12-17 00:38:33.171519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.214 [2024-12-17 00:38:33.171529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:72904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.214 [2024-12-17 00:38:33.171538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.214 [2024-12-17 00:38:33.171548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:72912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.214 [2024-12-17 00:38:33.171556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.214 [2024-12-17 00:38:33.171567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:72920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.214 [2024-12-17 00:38:33.171575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.214 [2024-12-17 00:38:33.171587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.214 [2024-12-17 00:38:33.171595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.214 [2024-12-17 00:38:33.171606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:72936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.214 [2024-12-17 00:38:33.171614] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.214 [2024-12-17 00:38:33.171624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:72944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.214 [2024-12-17 00:38:33.171633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.214 [2024-12-17 00:38:33.171657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:72952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.214 [2024-12-17 00:38:33.171666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.214 [2024-12-17 00:38:33.171676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:72960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.214 [2024-12-17 00:38:33.171684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.214 [2024-12-17 00:38:33.171694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:72968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.214 [2024-12-17 00:38:33.171702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.214 [2024-12-17 00:38:33.171712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:72976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.214 [2024-12-17 00:38:33.171720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.214 [2024-12-17 00:38:33.171730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:72984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.214 [2024-12-17 00:38:33.171738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.214 [2024-12-17 00:38:33.171748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:73872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.214 [2024-12-17 00:38:33.171756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.214 [2024-12-17 00:38:33.171766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:73880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.214 [2024-12-17 00:38:33.171774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.214 [2024-12-17 00:38:33.171784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:72992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.214 [2024-12-17 00:38:33.171792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.214 [2024-12-17 00:38:33.171817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:73000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.214 [2024-12-17 00:38:33.171827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.214 [2024-12-17 00:38:33.171837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:73008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.214 [2024-12-17 00:38:33.171845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.215 [2024-12-17 00:38:33.171855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:73016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.215 [2024-12-17 00:38:33.171863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.215 [2024-12-17 00:38:33.171873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:73024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.215 [2024-12-17 00:38:33.171881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.215 [2024-12-17 00:38:33.171891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:73032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.215 [2024-12-17 00:38:33.171899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.215 [2024-12-17 00:38:33.171908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:73040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.215 [2024-12-17 00:38:33.171917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.215 [2024-12-17 00:38:33.171926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:73888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.215 [2024-12-17 00:38:33.171934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.215 [2024-12-17 00:38:33.171944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:73048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.215 [2024-12-17 00:38:33.171952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.215 [2024-12-17 00:38:33.171962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:73056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.215 [2024-12-17 00:38:33.171970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.215 [2024-12-17 00:38:33.171980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:73064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.215 [2024-12-17 00:38:33.171988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.215 [2024-12-17 00:38:33.171997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:73072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.215 [2024-12-17 00:38:33.172005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.215 [2024-12-17 00:38:33.172015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:73080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.215 [2024-12-17 00:38:33.172023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.215 [2024-12-17 00:38:33.172032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:73088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.215 [2024-12-17 00:38:33.172041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.215 [2024-12-17 00:38:33.172050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:73096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.215 [2024-12-17 00:38:33.172058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.215 [2024-12-17 00:38:33.172069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:73104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.215 [2024-12-17 00:38:33.172076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.215 [2024-12-17 00:38:33.172086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:73112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.215 [2024-12-17 00:38:33.172094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.215 [2024-12-17 00:38:33.172104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:73120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.215 [2024-12-17 00:38:33.172113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.215 [2024-12-17 00:38:33.172123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:73128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.215 [2024-12-17 00:38:33.172131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.215 [2024-12-17 00:38:33.172141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:73136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.215 [2024-12-17 00:38:33.172148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.215 [2024-12-17 00:38:33.172158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:73144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.215 [2024-12-17 00:38:33.172166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.215 [2024-12-17 00:38:33.172176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:73152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.215 [2024-12-17 00:38:33.172184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:47.215 [2024-12-17 00:38:33.172194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:73160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.215 [2024-12-17 00:38:33.172201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.215 [2024-12-17 00:38:33.172212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:73168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.215 [2024-12-17 00:38:33.172219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.215 [2024-12-17 00:38:33.172229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:73176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.215 [2024-12-17 00:38:33.172237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.215 [2024-12-17 00:38:33.172247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:73184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.215 [2024-12-17 00:38:33.172255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.215 [2024-12-17 00:38:33.172264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:73192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.215 [2024-12-17 00:38:33.172272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.215 [2024-12-17 00:38:33.172282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:73200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.215 [2024-12-17 00:38:33.172290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.215 [2024-12-17 00:38:33.172300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:73208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.215 [2024-12-17 00:38:33.172308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.215 [2024-12-17 00:38:33.172334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:73216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.215 [2024-12-17 00:38:33.172358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.215 [2024-12-17 00:38:33.172377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:73224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.215 [2024-12-17 00:38:33.172387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.215 [2024-12-17 00:38:33.172398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:73232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.215 [2024-12-17 00:38:33.172406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.215 [2024-12-17 00:38:33.172417] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:73240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.215 [2024-12-17 00:38:33.172425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.215 [2024-12-17 00:38:33.172436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:73248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.215 [2024-12-17 00:38:33.172445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.215 [2024-12-17 00:38:33.172455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:73256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.215 [2024-12-17 00:38:33.172464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.215 [2024-12-17 00:38:33.172474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:73264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.215 [2024-12-17 00:38:33.172483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.215 [2024-12-17 00:38:33.172493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:73272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.215 [2024-12-17 00:38:33.172501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.215 [2024-12-17 00:38:33.172512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.215 [2024-12-17 00:38:33.172520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.215 [2024-12-17 00:38:33.172530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:73288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.215 [2024-12-17 00:38:33.172539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.215 [2024-12-17 00:38:33.172574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:73296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.215 [2024-12-17 00:38:33.172584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.215 [2024-12-17 00:38:33.172595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:73304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.215 [2024-12-17 00:38:33.172604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.215 [2024-12-17 00:38:33.172614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:73312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.215 [2024-12-17 00:38:33.172623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.215 [2024-12-17 00:38:33.172634] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.215 [2024-12-17 00:38:33.172643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.215 [2024-12-17 00:38:33.172653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:73328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.215 [2024-12-17 00:38:33.172671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.216 [2024-12-17 00:38:33.172682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:73336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.216 [2024-12-17 00:38:33.172690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.216 [2024-12-17 00:38:33.172701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:73344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.216 [2024-12-17 00:38:33.172709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.216 [2024-12-17 00:38:33.172720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:73352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.216 [2024-12-17 00:38:33.172728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.216 [2024-12-17 00:38:33.172739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:73360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.216 [2024-12-17 00:38:33.172748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.216 [2024-12-17 00:38:33.172758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:73368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.216 [2024-12-17 00:38:33.172767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.216 [2024-12-17 00:38:33.172778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:73376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.216 [2024-12-17 00:38:33.172787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.216 [2024-12-17 00:38:33.172798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:73384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.216 [2024-12-17 00:38:33.172806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.216 [2024-12-17 00:38:33.172817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:73392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.216 [2024-12-17 00:38:33.172840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.216 [2024-12-17 00:38:33.172865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:26 nsid:1 lba:73400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.216 [2024-12-17 00:38:33.172874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.216 [2024-12-17 00:38:33.172884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:73408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.216 [2024-12-17 00:38:33.172892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.216 [2024-12-17 00:38:33.172916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:73416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.216 [2024-12-17 00:38:33.172925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.216 [2024-12-17 00:38:33.172934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:73424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.216 [2024-12-17 00:38:33.172942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.216 [2024-12-17 00:38:33.172951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:73432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.216 [2024-12-17 00:38:33.172960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.216 [2024-12-17 00:38:33.172969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:73440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.216 [2024-12-17 00:38:33.172977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.216 [2024-12-17 00:38:33.172987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:73448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.216 [2024-12-17 00:38:33.172996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.216 [2024-12-17 00:38:33.173005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:73456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.216 [2024-12-17 00:38:33.173014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.216 [2024-12-17 00:38:33.173023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:73464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.216 [2024-12-17 00:38:33.173031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.216 [2024-12-17 00:38:33.173040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:73472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.216 [2024-12-17 00:38:33.173048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.216 [2024-12-17 00:38:33.173058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:73480 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.216 [2024-12-17 00:38:33.173066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.216 [2024-12-17 00:38:33.173075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:73488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.216 [2024-12-17 00:38:33.173083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.216 [2024-12-17 00:38:33.173093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:73496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.216 [2024-12-17 00:38:33.173101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.216 [2024-12-17 00:38:33.173112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:73504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.216 [2024-12-17 00:38:33.173120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.216 [2024-12-17 00:38:33.173130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:73512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.216 [2024-12-17 00:38:33.173138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.216 [2024-12-17 00:38:33.173148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:73520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.216 [2024-12-17 00:38:33.173156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.216 [2024-12-17 00:38:33.173166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:73528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.216 [2024-12-17 00:38:33.173174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.216 [2024-12-17 00:38:33.173184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:73536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.216 [2024-12-17 00:38:33.173192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.216 [2024-12-17 00:38:33.173201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:73544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.216 [2024-12-17 00:38:33.173209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.216 [2024-12-17 00:38:33.173219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:73552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.216 [2024-12-17 00:38:33.173227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.216 [2024-12-17 00:38:33.173236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:73560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:47.216 [2024-12-17 00:38:33.173244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.216 [2024-12-17 00:38:33.173254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:73568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.216 [2024-12-17 00:38:33.173262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.216 [2024-12-17 00:38:33.173271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:73576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.216 [2024-12-17 00:38:33.173279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.216 [2024-12-17 00:38:33.173289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:73584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.216 [2024-12-17 00:38:33.173297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.216 [2024-12-17 00:38:33.173307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:73592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.216 [2024-12-17 00:38:33.173315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.216 [2024-12-17 00:38:33.173324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:73600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.216 [2024-12-17 00:38:33.173332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.216 [2024-12-17 00:38:33.173342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:73608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.216 [2024-12-17 00:38:33.173366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.216 [2024-12-17 00:38:33.173376] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082db0 is same with the state(6) to be set 00:22:47.216 [2024-12-17 00:38:33.173396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:47.216 [2024-12-17 00:38:33.173404] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:47.216 [2024-12-17 00:38:33.173412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73616 len:8 PRP1 0x0 PRP2 0x0 00:22:47.216 [2024-12-17 00:38:33.173421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.216 [2024-12-17 00:38:33.173459] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1082db0 was disconnected and freed. reset controller. 
00:22:47.216 [2024-12-17 00:38:33.173655] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:47.216 [2024-12-17 00:38:33.173720] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10608b0 (9): Bad file descriptor 00:22:47.216 [2024-12-17 00:38:33.173812] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:47.216 [2024-12-17 00:38:33.173846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10608b0 with addr=10.0.0.3, port=4420 00:22:47.216 [2024-12-17 00:38:33.173856] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10608b0 is same with the state(6) to be set 00:22:47.217 [2024-12-17 00:38:33.173872] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10608b0 (9): Bad file descriptor 00:22:47.217 [2024-12-17 00:38:33.173886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:47.217 [2024-12-17 00:38:33.173895] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:47.217 [2024-12-17 00:38:33.173904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:47.217 [2024-12-17 00:38:33.173922] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:47.217 [2024-12-17 00:38:33.173932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:47.217 00:38:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:22:48.411 4554.50 IOPS, 17.79 MiB/s [2024-12-17T00:38:34.414Z] [2024-12-17 00:38:34.174015] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.411 [2024-12-17 00:38:34.174076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10608b0 with addr=10.0.0.3, port=4420 00:22:48.411 [2024-12-17 00:38:34.174091] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10608b0 is same with the state(6) to be set 00:22:48.411 [2024-12-17 00:38:34.174110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10608b0 (9): Bad file descriptor 00:22:48.411 [2024-12-17 00:38:34.174126] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:48.411 [2024-12-17 00:38:34.174134] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:48.411 [2024-12-17 00:38:34.174143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:48.411 [2024-12-17 00:38:34.174164] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:48.411 [2024-12-17 00:38:34.174174] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:49.344 3036.33 IOPS, 11.86 MiB/s [2024-12-17T00:38:35.347Z] [2024-12-17 00:38:35.174241] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.344 [2024-12-17 00:38:35.174298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10608b0 with addr=10.0.0.3, port=4420 00:22:49.344 [2024-12-17 00:38:35.174338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10608b0 is same with the state(6) to be set 00:22:49.344 [2024-12-17 00:38:35.174359] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10608b0 (9): Bad file descriptor 00:22:49.344 [2024-12-17 00:38:35.174375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:49.344 [2024-12-17 00:38:35.174383] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:49.344 [2024-12-17 00:38:35.174392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:49.344 [2024-12-17 00:38:35.174411] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:49.344 [2024-12-17 00:38:35.174421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:50.298 2277.25 IOPS, 8.90 MiB/s [2024-12-17T00:38:36.301Z] [2024-12-17 00:38:36.177472] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.298 [2024-12-17 00:38:36.177530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10608b0 with addr=10.0.0.3, port=4420 00:22:50.298 [2024-12-17 00:38:36.177543] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10608b0 is same with the state(6) to be set 00:22:50.298 [2024-12-17 00:38:36.177752] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10608b0 (9): Bad file descriptor 00:22:50.298 [2024-12-17 00:38:36.178023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:50.298 [2024-12-17 00:38:36.178043] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:50.298 [2024-12-17 00:38:36.178052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:50.298 [2024-12-17 00:38:36.181622] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:50.298 [2024-12-17 00:38:36.181652] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:50.298 00:38:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:50.570 [2024-12-17 00:38:36.456667] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:50.570 00:38:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 96511 00:22:51.395 1821.80 IOPS, 7.12 MiB/s [2024-12-17T00:38:37.398Z] [2024-12-17 00:38:37.214233] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:53.319 3012.67 IOPS, 11.77 MiB/s [2024-12-17T00:38:40.257Z] 4119.29 IOPS, 16.09 MiB/s [2024-12-17T00:38:41.194Z] 4963.38 IOPS, 19.39 MiB/s [2024-12-17T00:38:42.129Z] 5632.11 IOPS, 22.00 MiB/s [2024-12-17T00:38:42.129Z] 6159.50 IOPS, 24.06 MiB/s 00:22:56.126 Latency(us) 00:22:56.126 [2024-12-17T00:38:42.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.126 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:56.126 Verification LBA range: start 0x0 length 0x4000 00:22:56.126 NVMe0n1 : 10.01 6165.31 24.08 4176.73 0.00 12353.61 536.20 3019898.88 00:22:56.126 [2024-12-17T00:38:42.129Z] =================================================================================================================== 00:22:56.126 [2024-12-17T00:38:42.129Z] Total : 6165.31 24.08 4176.73 0.00 12353.61 0.00 3019898.88 00:22:56.126 { 00:22:56.126 "results": [ 00:22:56.126 { 00:22:56.126 "job": "NVMe0n1", 00:22:56.126 "core_mask": "0x4", 00:22:56.126 "workload": "verify", 00:22:56.126 "status": "finished", 00:22:56.126 "verify_range": { 00:22:56.126 "start": 0, 00:22:56.126 "length": 16384 00:22:56.126 }, 00:22:56.126 "queue_depth": 128, 00:22:56.126 "io_size": 4096, 00:22:56.126 "runtime": 10.007121, 00:22:56.126 "iops": 6165.309682974754, 00:22:56.126 "mibps": 24.083240949120132, 00:22:56.126 "io_failed": 41797, 00:22:56.126 "io_timeout": 0, 00:22:56.126 "avg_latency_us": 12353.607998074549, 00:22:56.126 "min_latency_us": 536.2036363636364, 00:22:56.126 "max_latency_us": 3019898.88 00:22:56.126 } 00:22:56.126 ], 00:22:56.126 "core_count": 1 00:22:56.126 } 00:22:56.126 00:38:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 96383 00:22:56.126 00:38:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 96383 ']' 00:22:56.126 00:38:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 96383 00:22:56.126 00:38:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:22:56.126 00:38:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:56.126 00:38:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96383 00:22:56.126 killing process with pid 96383 00:22:56.126 Received shutdown signal, test time was about 10.000000 seconds 00:22:56.126 00:22:56.126 Latency(us) 00:22:56.126 [2024-12-17T00:38:42.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.126 [2024-12-17T00:38:42.129Z] =================================================================================================================== 00:22:56.126 [2024-12-17T00:38:42.129Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:56.127 00:38:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:56.127 00:38:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:56.127 00:38:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96383' 00:22:56.127 00:38:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 96383 00:22:56.127 00:38:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 96383 00:22:56.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:56.385 00:38:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=96626 00:22:56.385 00:38:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:22:56.385 00:38:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 96626 /var/tmp/bdevperf.sock 00:22:56.385 00:38:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 96626 ']' 00:22:56.385 00:38:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:56.385 00:38:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:56.385 00:38:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:56.385 00:38:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:56.385 00:38:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:56.385 [2024-12-17 00:38:42.284032] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:22:56.385 [2024-12-17 00:38:42.284138] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96626 ] 00:22:56.644 [2024-12-17 00:38:42.421788] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.644 [2024-12-17 00:38:42.455461] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:56.644 [2024-12-17 00:38:42.483346] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:56.644 00:38:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:56.644 00:38:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:22:56.644 00:38:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96626 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:22:56.644 00:38:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=96633 00:22:56.644 00:38:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:22:56.903 00:38:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:57.161 NVMe0n1 00:22:57.161 00:38:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=96670 00:22:57.161 00:38:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:57.161 00:38:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:22:57.420 Running I/O for 10 seconds... 
00:22:58.355 00:38:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:58.616 17399.00 IOPS, 67.96 MiB/s [2024-12-17T00:38:44.619Z] [2024-12-17 00:38:44.381680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.616 [2024-12-17 00:38:44.381740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.616 [2024-12-17 00:38:44.381771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.616 [2024-12-17 00:38:44.381779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.616 [2024-12-17 00:38:44.381788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.616 [2024-12-17 00:38:44.381796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.616 [2024-12-17 00:38:44.381805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.616 [2024-12-17 00:38:44.381813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.616 [2024-12-17 00:38:44.381821] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2f650 is same with the state(6) to be set 00:22:58.616 [2024-12-17 00:38:44.382084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.616 [2024-12-17 00:38:44.382111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.616 [2024-12-17 00:38:44.382131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.616 [2024-12-17 00:38:44.382142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.616 [2024-12-17 00:38:44.382153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:49752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.616 [2024-12-17 00:38:44.382162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.616 [2024-12-17 00:38:44.382173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:102232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.616 [2024-12-17 00:38:44.382182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.616 [2024-12-17 00:38:44.382193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:52280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.616 [2024-12-17 00:38:44.382202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:58.616 [2024-12-17 00:38:44.382212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:68560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.616 [2024-12-17 00:38:44.382221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.616 [2024-12-17 00:38:44.382232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:113104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.616 [2024-12-17 00:38:44.382241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.616 [2024-12-17 00:38:44.382252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:88176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.616 [2024-12-17 00:38:44.382260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.616 [2024-12-17 00:38:44.382271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:61192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.616 [2024-12-17 00:38:44.382279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.616 [2024-12-17 00:38:44.382290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:26056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.616 [2024-12-17 00:38:44.382299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.616 [2024-12-17 00:38:44.382321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.616 [2024-12-17 00:38:44.382331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.616 [2024-12-17 00:38:44.382343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:39760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.616 [2024-12-17 00:38:44.382352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.616 [2024-12-17 00:38:44.382363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.616 [2024-12-17 00:38:44.382372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.616 [2024-12-17 00:38:44.382382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:81896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.616 [2024-12-17 00:38:44.382391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.616 [2024-12-17 00:38:44.382402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.616 [2024-12-17 00:38:44.382411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.616 [2024-12-17 
00:38:44.382422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.616 [2024-12-17 00:38:44.382430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.616 [2024-12-17 00:38:44.382441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:121584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.616 [2024-12-17 00:38:44.382450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.616 [2024-12-17 00:38:44.382460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.616 [2024-12-17 00:38:44.382469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.616 [2024-12-17 00:38:44.382479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.616 [2024-12-17 00:38:44.382487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.616 [2024-12-17 00:38:44.382498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.616 [2024-12-17 00:38:44.382507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.616 [2024-12-17 00:38:44.382518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.616 [2024-12-17 00:38:44.382526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.616 [2024-12-17 00:38:44.382537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:87632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.616 [2024-12-17 00:38:44.382545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.616 [2024-12-17 00:38:44.382556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:62160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.616 [2024-12-17 00:38:44.382564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.616 [2024-12-17 00:38:44.382575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:63640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.616 [2024-12-17 00:38:44.382583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.616 [2024-12-17 00:38:44.382593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.616 [2024-12-17 00:38:44.382603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.616 [2024-12-17 00:38:44.382613] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:94504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.616 [2024-12-17 00:38:44.382622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.616 [2024-12-17 00:38:44.382632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:39448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.616 [2024-12-17 00:38:44.382641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.382652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.617 [2024-12-17 00:38:44.382661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.382671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:62688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.617 [2024-12-17 00:38:44.382680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.382690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:117056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.617 [2024-12-17 00:38:44.382699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.382710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:87416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.617 [2024-12-17 00:38:44.382718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.382729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.617 [2024-12-17 00:38:44.382737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.382748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:58352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.617 [2024-12-17 00:38:44.382757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.382767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:66472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.617 [2024-12-17 00:38:44.382776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.382787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.617 [2024-12-17 00:38:44.382795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.382806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:35 nsid:1 lba:96592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.617 [2024-12-17 00:38:44.382815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.382825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:79624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.617 [2024-12-17 00:38:44.382834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.382844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:76496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.617 [2024-12-17 00:38:44.382853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.382863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:112056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.617 [2024-12-17 00:38:44.382872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.382882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.617 [2024-12-17 00:38:44.382891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.382901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.617 [2024-12-17 00:38:44.382910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.382921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:40664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.617 [2024-12-17 00:38:44.382929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.382940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:69840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.617 [2024-12-17 00:38:44.382949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.382959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:46192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.617 [2024-12-17 00:38:44.382968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.382979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:65376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.617 [2024-12-17 00:38:44.382987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.382998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:116008 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.617 [2024-12-17 00:38:44.383007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.383017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:44192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.617 [2024-12-17 00:38:44.383026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.383036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.617 [2024-12-17 00:38:44.383045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.383060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:27496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.617 [2024-12-17 00:38:44.383068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.383079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.617 [2024-12-17 00:38:44.383087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.383098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:35848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.617 [2024-12-17 00:38:44.383106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.383117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:130472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.617 [2024-12-17 00:38:44.383126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.383137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:84224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.617 [2024-12-17 00:38:44.383145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.383156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.617 [2024-12-17 00:38:44.383164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.383175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:27496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.617 [2024-12-17 00:38:44.383183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.383194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:58.617 [2024-12-17 00:38:44.383203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.383214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.617 [2024-12-17 00:38:44.383222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.383233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.617 [2024-12-17 00:38:44.383241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.383252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.617 [2024-12-17 00:38:44.383261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.383272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.617 [2024-12-17 00:38:44.383281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.383292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:10000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.617 [2024-12-17 00:38:44.383300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.383320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.617 [2024-12-17 00:38:44.383330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.383340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:105248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.617 [2024-12-17 00:38:44.383349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.383360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:68352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.617 [2024-12-17 00:38:44.383368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.383378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.617 [2024-12-17 00:38:44.383387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.383397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:96712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.617 [2024-12-17 00:38:44.383406] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.383416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:29640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.617 [2024-12-17 00:38:44.383425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.617 [2024-12-17 00:38:44.383435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.617 [2024-12-17 00:38:44.383445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.618 [2024-12-17 00:38:44.383455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:33976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.618 [2024-12-17 00:38:44.383464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.618 [2024-12-17 00:38:44.383475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:71136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.618 [2024-12-17 00:38:44.383483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.618 [2024-12-17 00:38:44.383494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:48368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.618 [2024-12-17 00:38:44.383503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.618 [2024-12-17 00:38:44.383513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:65544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.618 [2024-12-17 00:38:44.383522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.618 [2024-12-17 00:38:44.383533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:27488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.618 [2024-12-17 00:38:44.383542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.618 [2024-12-17 00:38:44.383552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:34040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.618 [2024-12-17 00:38:44.383561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.618 [2024-12-17 00:38:44.383571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.618 [2024-12-17 00:38:44.383580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.618 [2024-12-17 00:38:44.383596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:28456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.618 [2024-12-17 00:38:44.383605] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.618 [2024-12-17 00:38:44.383615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:34048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.618 [2024-12-17 00:38:44.383623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.618 [2024-12-17 00:38:44.383634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:14464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.618 [2024-12-17 00:38:44.383642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.618 [2024-12-17 00:38:44.383653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.618 [2024-12-17 00:38:44.383661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.618 [2024-12-17 00:38:44.383672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:125592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.618 [2024-12-17 00:38:44.383680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.618 [2024-12-17 00:38:44.383691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:113072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.618 [2024-12-17 00:38:44.383699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.618 [2024-12-17 00:38:44.383710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.618 [2024-12-17 00:38:44.383719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.618 [2024-12-17 00:38:44.383729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:74256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.618 [2024-12-17 00:38:44.383738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.618 [2024-12-17 00:38:44.383749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:125448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.618 [2024-12-17 00:38:44.383758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.618 [2024-12-17 00:38:44.383768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:49944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.618 [2024-12-17 00:38:44.383777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.618 [2024-12-17 00:38:44.383788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:93816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.618 [2024-12-17 00:38:44.383797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.618 [2024-12-17 00:38:44.383807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:45904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.618 [2024-12-17 00:38:44.383816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.618 [2024-12-17 00:38:44.383827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.618 [2024-12-17 00:38:44.383835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.618 [2024-12-17 00:38:44.383846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:56416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.618 [2024-12-17 00:38:44.383854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.618 [2024-12-17 00:38:44.383865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:89968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.618 [2024-12-17 00:38:44.383873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.618 [2024-12-17 00:38:44.383884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:44936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.618 [2024-12-17 00:38:44.383892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.618 [2024-12-17 00:38:44.383905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.618 [2024-12-17 00:38:44.383913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.618 [2024-12-17 00:38:44.383924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.618 [2024-12-17 00:38:44.383932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.618 [2024-12-17 00:38:44.383943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:45560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.618 [2024-12-17 00:38:44.383952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.618 [2024-12-17 00:38:44.383963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:41504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.618 [2024-12-17 00:38:44.383971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.618 [2024-12-17 00:38:44.383982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:39120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.618 [2024-12-17 00:38:44.383990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:58.618 [2024-12-17 00:38:44.384001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:111240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.618 [2024-12-17 00:38:44.384010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.618 [2024-12-17 00:38:44.384020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:115448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.618 [2024-12-17 00:38:44.384029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.618 [2024-12-17 00:38:44.384040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:40952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.618 [2024-12-17 00:38:44.384048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.618 [2024-12-17 00:38:44.384059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:87064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.618 [2024-12-17 00:38:44.384067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.618 [2024-12-17 00:38:44.384078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:32056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.618 [2024-12-17 00:38:44.384086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.618 [2024-12-17 00:38:44.384096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:62736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.618 [2024-12-17 00:38:44.384105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.618 [2024-12-17 00:38:44.384116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:30600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.618 [2024-12-17 00:38:44.384124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.618 [2024-12-17 00:38:44.384135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:95416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.618 [2024-12-17 00:38:44.384143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.618 [2024-12-17 00:38:44.384153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:86864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.618 [2024-12-17 00:38:44.384162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.618 [2024-12-17 00:38:44.384173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:32816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.618 [2024-12-17 00:38:44.384181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.618 [2024-12-17 
00:38:44.384191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:123032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.618 [2024-12-17 00:38:44.384200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.618 [2024-12-17 00:38:44.384212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:29568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.618 [2024-12-17 00:38:44.384221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.618 [2024-12-17 00:38:44.384231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:59256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.619 [2024-12-17 00:38:44.384240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.619 [2024-12-17 00:38:44.384251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:41448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.619 [2024-12-17 00:38:44.384259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.619 [2024-12-17 00:38:44.384270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:52384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.619 [2024-12-17 00:38:44.384278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.619 [2024-12-17 00:38:44.384289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.619 [2024-12-17 00:38:44.384297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.619 [2024-12-17 00:38:44.384316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:126424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.619 [2024-12-17 00:38:44.384326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.619 [2024-12-17 00:38:44.384337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:101696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.619 [2024-12-17 00:38:44.384346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.619 [2024-12-17 00:38:44.384356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:63104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.619 [2024-12-17 00:38:44.384365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.619 [2024-12-17 00:38:44.384376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:115208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.619 [2024-12-17 00:38:44.384384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.619 [2024-12-17 00:38:44.384395] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:101384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.619 [2024-12-17 00:38:44.384404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.619 [2024-12-17 00:38:44.384415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.619 [2024-12-17 00:38:44.384423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.619 [2024-12-17 00:38:44.384434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:119448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.619 [2024-12-17 00:38:44.384442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.619 [2024-12-17 00:38:44.384452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:50888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.619 [2024-12-17 00:38:44.384461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.619 [2024-12-17 00:38:44.384471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.619 [2024-12-17 00:38:44.384480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.619 [2024-12-17 00:38:44.384490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.619 [2024-12-17 00:38:44.384499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.619 [2024-12-17 00:38:44.384509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:130760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.619 [2024-12-17 00:38:44.384518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.619 [2024-12-17 00:38:44.384530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:111552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.619 [2024-12-17 00:38:44.384539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.619 [2024-12-17 00:38:44.384576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:56840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.619 [2024-12-17 00:38:44.384586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.619 [2024-12-17 00:38:44.384597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:129392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.619 [2024-12-17 00:38:44.384606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.619 [2024-12-17 00:38:44.384617] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:45688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.619 [2024-12-17 00:38:44.384626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.619 [2024-12-17 00:38:44.384636] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc50810 is same with the state(6) to be set 00:22:58.619 [2024-12-17 00:38:44.384648] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:58.619 [2024-12-17 00:38:44.384655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:58.619 [2024-12-17 00:38:44.384664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1328 len:8 PRP1 0x0 PRP2 0x0 00:22:58.619 [2024-12-17 00:38:44.384672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.619 [2024-12-17 00:38:44.384726] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc50810 was disconnected and freed. reset controller. 00:22:58.619 [2024-12-17 00:38:44.384993] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:58.619 [2024-12-17 00:38:44.385039] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2f650 (9): Bad file descriptor 00:22:58.619 [2024-12-17 00:38:44.385134] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.619 [2024-12-17 00:38:44.385157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc2f650 with addr=10.0.0.3, port=4420 00:22:58.619 [2024-12-17 00:38:44.385168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2f650 is same with the state(6) to be set 00:22:58.619 [2024-12-17 00:38:44.385186] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2f650 (9): Bad file descriptor 00:22:58.619 [2024-12-17 00:38:44.385201] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:58.619 [2024-12-17 00:38:44.385211] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:58.619 [2024-12-17 00:38:44.385221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:58.619 [2024-12-17 00:38:44.385239] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:58.619 [2024-12-17 00:38:44.385249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:58.619 00:38:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 96670 00:23:00.491 10033.00 IOPS, 39.19 MiB/s [2024-12-17T00:38:46.494Z] 6688.67 IOPS, 26.13 MiB/s [2024-12-17T00:38:46.494Z] [2024-12-17 00:38:46.385405] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:00.491 [2024-12-17 00:38:46.385480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc2f650 with addr=10.0.0.3, port=4420 00:23:00.491 [2024-12-17 00:38:46.385494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2f650 is same with the state(6) to be set 00:23:00.491 [2024-12-17 00:38:46.385515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2f650 (9): Bad file descriptor 00:23:00.491 [2024-12-17 00:38:46.385531] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:00.491 [2024-12-17 00:38:46.385540] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:00.491 [2024-12-17 00:38:46.385549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:00.491 [2024-12-17 00:38:46.385571] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:00.491 [2024-12-17 00:38:46.385582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:02.361 5016.50 IOPS, 19.60 MiB/s [2024-12-17T00:38:48.622Z] 4013.20 IOPS, 15.68 MiB/s [2024-12-17T00:38:48.622Z] [2024-12-17 00:38:48.385736] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:02.619 [2024-12-17 00:38:48.385811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc2f650 with addr=10.0.0.3, port=4420 00:23:02.619 [2024-12-17 00:38:48.385825] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2f650 is same with the state(6) to be set 00:23:02.619 [2024-12-17 00:38:48.385845] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2f650 (9): Bad file descriptor 00:23:02.619 [2024-12-17 00:38:48.385861] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:02.619 [2024-12-17 00:38:48.385870] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:02.619 [2024-12-17 00:38:48.385880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:02.619 [2024-12-17 00:38:48.385901] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:02.619 [2024-12-17 00:38:48.385911] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:04.490 3344.33 IOPS, 13.06 MiB/s [2024-12-17T00:38:50.493Z] 2866.57 IOPS, 11.20 MiB/s [2024-12-17T00:38:50.493Z] [2024-12-17 00:38:50.385994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:04.490 [2024-12-17 00:38:50.386064] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:04.490 [2024-12-17 00:38:50.386091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:04.490 [2024-12-17 00:38:50.386100] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:23:04.490 [2024-12-17 00:38:50.386123] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:05.424 2508.25 IOPS, 9.80 MiB/s 00:23:05.424 Latency(us) 00:23:05.424 [2024-12-17T00:38:51.427Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.424 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:23:05.424 NVMe0n1 : 8.16 2458.40 9.60 15.68 0.00 51697.57 6970.65 7015926.69 00:23:05.424 [2024-12-17T00:38:51.427Z] =================================================================================================================== 00:23:05.424 [2024-12-17T00:38:51.427Z] Total : 2458.40 9.60 15.68 0.00 51697.57 6970.65 7015926.69 00:23:05.424 { 00:23:05.424 "results": [ 00:23:05.424 { 00:23:05.424 "job": "NVMe0n1", 00:23:05.424 "core_mask": "0x4", 00:23:05.424 "workload": "randread", 00:23:05.424 "status": "finished", 00:23:05.424 "queue_depth": 128, 00:23:05.424 "io_size": 4096, 00:23:05.424 "runtime": 8.162206, 00:23:05.424 "iops": 2458.40401479698, 00:23:05.424 "mibps": 9.603140682800703, 00:23:05.424 "io_failed": 128, 00:23:05.424 "io_timeout": 0, 00:23:05.424 "avg_latency_us": 51697.56735952172, 00:23:05.424 "min_latency_us": 6970.647272727273, 00:23:05.424 "max_latency_us": 7015926.69090909 00:23:05.424 } 00:23:05.424 ], 00:23:05.424 "core_count": 1 00:23:05.424 } 00:23:05.424 00:38:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:05.424 Attaching 5 probes... 
00:23:05.424 1362.267734: reset bdev controller NVMe0 00:23:05.424 1362.356622: reconnect bdev controller NVMe0 00:23:05.424 3362.578777: reconnect delay bdev controller NVMe0 00:23:05.424 3362.617620: reconnect bdev controller NVMe0 00:23:05.424 5362.905570: reconnect delay bdev controller NVMe0 00:23:05.424 5362.937223: reconnect bdev controller NVMe0 00:23:05.424 7363.241882: reconnect delay bdev controller NVMe0 00:23:05.424 7363.273857: reconnect bdev controller NVMe0 00:23:05.424 00:38:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:23:05.424 00:38:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:23:05.424 00:38:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 96633 00:23:05.424 00:38:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:05.425 00:38:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 96626 00:23:05.425 00:38:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 96626 ']' 00:23:05.425 00:38:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 96626 00:23:05.425 00:38:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:23:05.425 00:38:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:05.682 00:38:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96626 00:23:05.682 killing process with pid 96626 00:23:05.682 Received shutdown signal, test time was about 8.229488 seconds 00:23:05.682 00:23:05.682 Latency(us) 00:23:05.682 [2024-12-17T00:38:51.685Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.682 [2024-12-17T00:38:51.685Z] =================================================================================================================== 00:23:05.682 [2024-12-17T00:38:51.685Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:05.682 00:38:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:05.682 00:38:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:05.682 00:38:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96626' 00:23:05.682 00:38:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 96626 00:23:05.682 00:38:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 96626 00:23:05.682 00:38:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:05.941 00:38:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:23:05.941 00:38:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:23:05.941 00:38:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:05.941 00:38:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:23:05.941 00:38:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:05.941 00:38:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:23:05.941 00:38:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:05.941 00:38:51 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:05.941 rmmod nvme_tcp 00:23:05.941 rmmod nvme_fabrics 00:23:05.941 rmmod nvme_keyring 00:23:05.941 00:38:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:05.941 00:38:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:23:05.941 00:38:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:23:05.941 00:38:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@513 -- # '[' -n 96200 ']' 00:23:05.941 00:38:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@514 -- # killprocess 96200 00:23:05.941 00:38:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 96200 ']' 00:23:05.941 00:38:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 96200 00:23:05.941 00:38:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:23:05.941 00:38:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:05.941 00:38:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96200 00:23:05.941 killing process with pid 96200 00:23:05.941 00:38:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:05.941 00:38:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:05.941 00:38:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96200' 00:23:05.941 00:38:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 96200 00:23:05.941 00:38:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 96200 00:23:06.200 00:38:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:06.200 00:38:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:06.200 00:38:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:06.200 00:38:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:23:06.200 00:38:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@787 -- # iptables-save 00:23:06.200 00:38:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:06.200 00:38:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@787 -- # iptables-restore 00:23:06.200 00:38:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:06.200 00:38:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:06.200 00:38:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:06.200 00:38:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:06.200 00:38:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:06.200 00:38:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:06.200 00:38:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:06.200 00:38:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:06.200 00:38:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:06.200 00:38:52 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:06.200 00:38:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:06.200 00:38:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:06.459 00:38:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:06.459 00:38:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:06.459 00:38:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:06.459 00:38:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:06.459 00:38:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.459 00:38:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:06.459 00:38:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.459 00:38:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:23:06.459 00:23:06.459 real 0m45.569s 00:23:06.459 user 2m13.790s 00:23:06.459 sys 0m5.378s 00:23:06.459 ************************************ 00:23:06.459 END TEST nvmf_timeout 00:23:06.459 ************************************ 00:23:06.459 00:38:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:06.459 00:38:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:06.459 00:38:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:23:06.459 00:38:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:23:06.459 00:23:06.459 real 5m40.284s 00:23:06.459 user 15m58.117s 00:23:06.459 sys 1m15.898s 00:23:06.459 00:38:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:06.459 ************************************ 00:23:06.459 END TEST nvmf_host 00:23:06.459 ************************************ 00:23:06.459 00:38:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.459 00:38:52 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:23:06.459 00:38:52 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:23:06.459 00:23:06.459 real 14m57.201s 00:23:06.459 user 39m21.766s 00:23:06.459 sys 4m3.376s 00:23:06.459 ************************************ 00:23:06.459 END TEST nvmf_tcp 00:23:06.459 ************************************ 00:23:06.459 00:38:52 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:06.459 00:38:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:06.459 00:38:52 -- spdk/autotest.sh@281 -- # [[ 1 -eq 0 ]] 00:23:06.459 00:38:52 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:06.459 00:38:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:06.459 00:38:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:06.459 00:38:52 -- common/autotest_common.sh@10 -- # set +x 00:23:06.459 ************************************ 00:23:06.459 START TEST nvmf_dif 00:23:06.459 ************************************ 00:23:06.459 00:38:52 nvmf_dif -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:06.719 * Looking for test storage... 
00:23:06.719 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:06.719 00:38:52 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:06.719 00:38:52 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:23:06.719 00:38:52 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:06.719 00:38:52 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:06.719 00:38:52 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:06.719 00:38:52 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:06.719 00:38:52 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:06.719 00:38:52 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:23:06.719 00:38:52 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:23:06.719 00:38:52 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:23:06.719 00:38:52 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:23:06.719 00:38:52 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:23:06.719 00:38:52 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:23:06.719 00:38:52 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:23:06.719 00:38:52 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:06.719 00:38:52 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:23:06.719 00:38:52 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:23:06.719 00:38:52 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:06.719 00:38:52 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:06.719 00:38:52 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:23:06.719 00:38:52 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:23:06.719 00:38:52 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:06.719 00:38:52 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:23:06.719 00:38:52 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:23:06.719 00:38:52 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:23:06.719 00:38:52 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:23:06.719 00:38:52 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:06.719 00:38:52 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:23:06.719 00:38:52 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:23:06.719 00:38:52 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:06.719 00:38:52 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:06.719 00:38:52 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:23:06.719 00:38:52 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:06.719 00:38:52 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:06.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.719 --rc genhtml_branch_coverage=1 00:23:06.719 --rc genhtml_function_coverage=1 00:23:06.719 --rc genhtml_legend=1 00:23:06.719 --rc geninfo_all_blocks=1 00:23:06.719 --rc geninfo_unexecuted_blocks=1 00:23:06.719 00:23:06.719 ' 00:23:06.719 00:38:52 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:06.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.719 --rc genhtml_branch_coverage=1 00:23:06.719 --rc genhtml_function_coverage=1 00:23:06.719 --rc genhtml_legend=1 00:23:06.719 --rc geninfo_all_blocks=1 00:23:06.719 --rc geninfo_unexecuted_blocks=1 00:23:06.719 00:23:06.719 ' 00:23:06.719 00:38:52 nvmf_dif -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:23:06.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.719 --rc genhtml_branch_coverage=1 00:23:06.719 --rc genhtml_function_coverage=1 00:23:06.719 --rc genhtml_legend=1 00:23:06.719 --rc geninfo_all_blocks=1 00:23:06.719 --rc geninfo_unexecuted_blocks=1 00:23:06.719 00:23:06.719 ' 00:23:06.719 00:38:52 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:06.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.719 --rc genhtml_branch_coverage=1 00:23:06.719 --rc genhtml_function_coverage=1 00:23:06.719 --rc genhtml_legend=1 00:23:06.719 --rc geninfo_all_blocks=1 00:23:06.719 --rc geninfo_unexecuted_blocks=1 00:23:06.719 00:23:06.719 ' 00:23:06.719 00:38:52 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:06.719 00:38:52 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:23:06.719 00:38:52 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:06.719 00:38:52 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:06.719 00:38:52 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:06.719 00:38:52 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:06.719 00:38:52 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:06.719 00:38:52 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:06.719 00:38:52 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:06.719 00:38:52 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:06.719 00:38:52 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:06.719 00:38:52 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:06.719 00:38:52 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:23:06.719 00:38:52 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:23:06.719 00:38:52 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:06.719 00:38:52 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:06.719 00:38:52 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:06.719 00:38:52 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:06.719 00:38:52 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:06.719 00:38:52 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:23:06.719 00:38:52 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:06.719 00:38:52 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:06.719 00:38:52 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:06.719 00:38:52 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.719 00:38:52 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.719 00:38:52 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.719 00:38:52 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:23:06.719 00:38:52 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.719 00:38:52 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:23:06.719 00:38:52 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:06.719 00:38:52 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:06.719 00:38:52 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:06.719 00:38:52 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:06.719 00:38:52 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:06.719 00:38:52 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:06.719 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:06.719 00:38:52 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:06.719 00:38:52 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:06.719 00:38:52 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:06.719 00:38:52 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:23:06.719 00:38:52 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:23:06.719 00:38:52 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:23:06.719 00:38:52 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:23:06.719 00:38:52 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:23:06.720 00:38:52 nvmf_dif -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:06.720 00:38:52 nvmf_dif -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:06.720 00:38:52 nvmf_dif -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:06.720 00:38:52 nvmf_dif -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:06.720 00:38:52 nvmf_dif -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:06.720 00:38:52 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.720 00:38:52 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:06.720 00:38:52 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.720 00:38:52 nvmf_dif -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:23:06.720 00:38:52 nvmf_dif -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:23:06.720 00:38:52 nvmf_dif -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:23:06.720 00:38:52 
nvmf_dif -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:23:06.720 00:38:52 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:23:06.720 00:38:52 nvmf_dif -- nvmf/common.sh@456 -- # nvmf_veth_init 00:23:06.720 00:38:52 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:06.720 00:38:52 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:06.720 00:38:52 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:06.720 00:38:52 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:06.720 00:38:52 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:06.720 00:38:52 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:06.720 00:38:52 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:06.720 00:38:52 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:06.720 00:38:52 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:06.720 00:38:52 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:06.720 00:38:52 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:06.720 00:38:52 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:06.720 00:38:52 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:06.720 00:38:52 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:06.720 00:38:52 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:06.720 00:38:52 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:06.720 00:38:52 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:06.720 Cannot find device "nvmf_init_br" 00:23:06.720 00:38:52 nvmf_dif -- nvmf/common.sh@162 -- # true 00:23:06.720 00:38:52 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:06.720 Cannot find device "nvmf_init_br2" 00:23:06.720 00:38:52 nvmf_dif -- nvmf/common.sh@163 -- # true 00:23:06.720 00:38:52 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:06.720 Cannot find device "nvmf_tgt_br" 00:23:06.720 00:38:52 nvmf_dif -- nvmf/common.sh@164 -- # true 00:23:06.720 00:38:52 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:06.720 Cannot find device "nvmf_tgt_br2" 00:23:06.720 00:38:52 nvmf_dif -- nvmf/common.sh@165 -- # true 00:23:06.720 00:38:52 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:06.720 Cannot find device "nvmf_init_br" 00:23:06.720 00:38:52 nvmf_dif -- nvmf/common.sh@166 -- # true 00:23:06.720 00:38:52 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:06.978 Cannot find device "nvmf_init_br2" 00:23:06.978 00:38:52 nvmf_dif -- nvmf/common.sh@167 -- # true 00:23:06.978 00:38:52 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:06.978 Cannot find device "nvmf_tgt_br" 00:23:06.978 00:38:52 nvmf_dif -- nvmf/common.sh@168 -- # true 00:23:06.978 00:38:52 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:06.978 Cannot find device "nvmf_tgt_br2" 00:23:06.978 00:38:52 nvmf_dif -- nvmf/common.sh@169 -- # true 00:23:06.978 00:38:52 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:06.979 Cannot find device "nvmf_br" 00:23:06.979 00:38:52 nvmf_dif -- nvmf/common.sh@170 -- # true 00:23:06.979 00:38:52 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:23:06.979 Cannot find device "nvmf_init_if" 00:23:06.979 00:38:52 nvmf_dif -- nvmf/common.sh@171 -- # true 00:23:06.979 00:38:52 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:06.979 Cannot find device "nvmf_init_if2" 00:23:06.979 00:38:52 nvmf_dif -- nvmf/common.sh@172 -- # true 00:23:06.979 00:38:52 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:06.979 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:06.979 00:38:52 nvmf_dif -- nvmf/common.sh@173 -- # true 00:23:06.979 00:38:52 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:06.979 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:06.979 00:38:52 nvmf_dif -- nvmf/common.sh@174 -- # true 00:23:06.979 00:38:52 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:06.979 00:38:52 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:06.979 00:38:52 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:06.979 00:38:52 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:06.979 00:38:52 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:06.979 00:38:52 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:06.979 00:38:52 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:06.979 00:38:52 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:06.979 00:38:52 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:06.979 00:38:52 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:06.979 00:38:52 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:06.979 00:38:52 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:06.979 00:38:52 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:06.979 00:38:52 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:06.979 00:38:52 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:06.979 00:38:52 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:06.979 00:38:52 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:06.979 00:38:52 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:06.979 00:38:52 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:06.979 00:38:52 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:06.979 00:38:52 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:06.979 00:38:52 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:06.979 00:38:52 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:06.979 00:38:52 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:06.979 00:38:52 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:07.237 00:38:52 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:07.237 00:38:52 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:07.237 00:38:52 nvmf_dif -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:07.237 00:38:53 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:07.237 00:38:53 nvmf_dif -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:07.237 00:38:53 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:07.237 00:38:53 nvmf_dif -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:07.237 00:38:53 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:07.237 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:07.237 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:23:07.237 00:23:07.237 --- 10.0.0.3 ping statistics --- 00:23:07.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.237 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:23:07.237 00:38:53 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:07.237 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:07.237 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:23:07.237 00:23:07.237 --- 10.0.0.4 ping statistics --- 00:23:07.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.237 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:23:07.237 00:38:53 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:07.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:07.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:23:07.237 00:23:07.237 --- 10.0.0.1 ping statistics --- 00:23:07.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.238 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:23:07.238 00:38:53 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:07.238 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:07.238 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:23:07.238 00:23:07.238 --- 10.0.0.2 ping statistics --- 00:23:07.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.238 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:23:07.238 00:38:53 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:07.238 00:38:53 nvmf_dif -- nvmf/common.sh@457 -- # return 0 00:23:07.238 00:38:53 nvmf_dif -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:23:07.238 00:38:53 nvmf_dif -- nvmf/common.sh@475 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:07.496 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:07.496 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:07.496 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:07.496 00:38:53 nvmf_dif -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:07.496 00:38:53 nvmf_dif -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:07.496 00:38:53 nvmf_dif -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:07.496 00:38:53 nvmf_dif -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:07.496 00:38:53 nvmf_dif -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:07.496 00:38:53 nvmf_dif -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:07.496 00:38:53 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:23:07.496 00:38:53 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:23:07.496 00:38:53 nvmf_dif -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:07.496 00:38:53 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:07.496 00:38:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:07.496 00:38:53 nvmf_dif -- nvmf/common.sh@505 -- # nvmfpid=97171 00:23:07.496 00:38:53 nvmf_dif -- nvmf/common.sh@506 -- # waitforlisten 97171 00:23:07.496 00:38:53 nvmf_dif -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:07.496 00:38:53 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 97171 ']' 00:23:07.496 00:38:53 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.496 00:38:53 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:07.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.496 00:38:53 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.496 00:38:53 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:07.496 00:38:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:07.755 [2024-12-17 00:38:53.511614] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:23:07.755 [2024-12-17 00:38:53.511878] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:07.755 [2024-12-17 00:38:53.650582] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.755 [2024-12-17 00:38:53.694506] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
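At this point nvmf_veth_init has finished building the test network: two veth pairs for the initiator side (nvmf_init_if at 10.0.0.1, nvmf_init_if2 at 10.0.0.2), two for the target side (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4, both moved into the nvmf_tgt_ns_spdk namespace), all tied together through the nvmf_br bridge, with iptables ACCEPT rules for port 4420; the four pings confirm reachability in both directions, and nvmf_tgt is now coming up inside the namespace (the EAL/app notices above and below). A condensed recap of that topology for one interface pair, using exactly the names and addresses from the commands above (link-up steps omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # the target then runs inside the namespace and listens on 10.0.0.3:4420
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF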
00:23:07.755 [2024-12-17 00:38:53.694574] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:07.755 [2024-12-17 00:38:53.694589] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:07.755 [2024-12-17 00:38:53.694599] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:07.755 [2024-12-17 00:38:53.694608] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:07.755 [2024-12-17 00:38:53.694643] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:07.755 [2024-12-17 00:38:53.731314] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:08.014 00:38:53 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:08.014 00:38:53 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:23:08.014 00:38:53 nvmf_dif -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:08.014 00:38:53 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:08.014 00:38:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:08.014 00:38:53 nvmf_dif -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:08.014 00:38:53 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:23:08.014 00:38:53 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:23:08.014 00:38:53 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.014 00:38:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:08.014 [2024-12-17 00:38:53.827636] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:08.014 00:38:53 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.014 00:38:53 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:23:08.014 00:38:53 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:08.014 00:38:53 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:08.014 00:38:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:08.014 ************************************ 00:23:08.014 START TEST fio_dif_1_default 00:23:08.014 ************************************ 00:23:08.014 00:38:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:23:08.014 00:38:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:23:08.014 00:38:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:23:08.014 00:38:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:23:08.014 00:38:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:23:08.014 00:38:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:23:08.014 00:38:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:08.014 00:38:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.014 00:38:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:08.014 bdev_null0 00:23:08.014 00:38:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.014 00:38:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:08.014 
00:38:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.014 00:38:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:08.014 00:38:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:08.015 [2024-12-17 00:38:53.875768] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # config=() 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # local subsystem config 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:08.015 { 00:23:08.015 "params": { 00:23:08.015 "name": "Nvme$subsystem", 00:23:08.015 "trtype": "$TEST_TRANSPORT", 00:23:08.015 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:08.015 "adrfam": "ipv4", 00:23:08.015 "trsvcid": "$NVMF_PORT", 00:23:08.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:08.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:08.015 "hdgst": ${hdgst:-false}, 00:23:08.015 "ddgst": ${ddgst:-false} 00:23:08.015 }, 00:23:08.015 "method": "bdev_nvme_attach_controller" 00:23:08.015 } 00:23:08.015 EOF 00:23:08.015 )") 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # cat 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # jq . 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@581 -- # IFS=, 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:23:08.015 "params": { 00:23:08.015 "name": "Nvme0", 00:23:08.015 "trtype": "tcp", 00:23:08.015 "traddr": "10.0.0.3", 00:23:08.015 "adrfam": "ipv4", 00:23:08.015 "trsvcid": "4420", 00:23:08.015 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:08.015 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:08.015 "hdgst": false, 00:23:08.015 "ddgst": false 00:23:08.015 }, 00:23:08.015 "method": "bdev_nvme_attach_controller" 00:23:08.015 }' 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:08.015 00:38:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:08.274 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:08.274 fio-3.35 00:23:08.274 Starting 1 thread 00:23:20.517 00:23:20.517 filename0: (groupid=0, jobs=1): err= 0: pid=97220: Tue Dec 17 00:39:04 2024 00:23:20.517 read: IOPS=10.0k, BW=39.2MiB/s (41.1MB/s)(392MiB/10001msec) 00:23:20.517 slat (nsec): min=5811, max=52449, avg=7514.09, stdev=3068.98 00:23:20.517 clat (usec): min=312, max=4643, avg=376.12, stdev=48.78 00:23:20.517 lat (usec): min=318, max=4672, avg=383.63, stdev=49.47 00:23:20.517 clat percentiles (usec): 00:23:20.517 | 1.00th=[ 318], 5.00th=[ 
326], 10.00th=[ 334], 20.00th=[ 347], 00:23:20.517 | 30.00th=[ 355], 40.00th=[ 363], 50.00th=[ 367], 60.00th=[ 375], 00:23:20.517 | 70.00th=[ 388], 80.00th=[ 400], 90.00th=[ 429], 95.00th=[ 453], 00:23:20.517 | 99.00th=[ 515], 99.50th=[ 537], 99.90th=[ 594], 99.95th=[ 627], 00:23:20.517 | 99.99th=[ 758] 00:23:20.517 bw ( KiB/s): min=37504, max=41856, per=99.87%, avg=40128.00, stdev=1215.44, samples=19 00:23:20.517 iops : min= 9376, max=10464, avg=10032.00, stdev=303.86, samples=19 00:23:20.517 lat (usec) : 500=98.52%, 750=1.47%, 1000=0.01% 00:23:20.517 lat (msec) : 2=0.01%, 10=0.01% 00:23:20.517 cpu : usr=85.24%, sys=13.05%, ctx=17, majf=0, minf=4 00:23:20.517 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:20.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.517 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.517 issued rwts: total=100464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.517 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:20.517 00:23:20.517 Run status group 0 (all jobs): 00:23:20.517 READ: bw=39.2MiB/s (41.1MB/s), 39.2MiB/s-39.2MiB/s (41.1MB/s-41.1MB/s), io=392MiB (412MB), run=10001-10001msec 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:20.517 ************************************ 00:23:20.517 END TEST fio_dif_1_default 00:23:20.517 ************************************ 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.517 00:23:20.517 real 0m10.866s 00:23:20.517 user 0m9.081s 00:23:20.517 sys 0m1.532s 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:20.517 00:39:04 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:23:20.517 00:39:04 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:20.517 00:39:04 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:20.517 00:39:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:20.517 ************************************ 00:23:20.517 START TEST fio_dif_1_multi_subsystems 00:23:20.517 ************************************ 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:20.517 bdev_null0 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:20.517 [2024-12-17 00:39:04.790261] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:20.517 bdev_null1 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:23:20.517 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # config=() 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # local subsystem config 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:20.518 { 00:23:20.518 "params": { 00:23:20.518 "name": "Nvme$subsystem", 00:23:20.518 "trtype": "$TEST_TRANSPORT", 00:23:20.518 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.518 "adrfam": "ipv4", 00:23:20.518 "trsvcid": "$NVMF_PORT", 00:23:20.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.518 "hdgst": ${hdgst:-false}, 00:23:20.518 "ddgst": ${ddgst:-false} 00:23:20.518 }, 00:23:20.518 "method": "bdev_nvme_attach_controller" 00:23:20.518 } 00:23:20.518 EOF 00:23:20.518 )") 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 
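The subsystem plumbing for this test case is now in place: each of the two target subsystems gets a 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 1, an nqn...cnodeN subsystem, a namespace, and a TCP listener on 10.0.0.3:4420. The rpc_cmd calls above correspond to plain rpc.py invocations against the target started earlier; a sketch for subsystem 1 (subsystem 0 is identical apart from the 0/1 suffix):

  scripts/rpc.py bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420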
00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:20.518 { 00:23:20.518 "params": { 00:23:20.518 "name": "Nvme$subsystem", 00:23:20.518 "trtype": "$TEST_TRANSPORT", 00:23:20.518 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.518 "adrfam": "ipv4", 00:23:20.518 "trsvcid": "$NVMF_PORT", 00:23:20.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.518 "hdgst": ${hdgst:-false}, 00:23:20.518 "ddgst": ${ddgst:-false} 00:23:20.518 }, 00:23:20.518 "method": "bdev_nvme_attach_controller" 00:23:20.518 } 00:23:20.518 EOF 00:23:20.518 )") 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # jq . 
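The JSON that gen_nvmf_target_json assembles (printed just below) carries one bdev_nvme_attach_controller entry per subsystem, and it is fed to the SPDK fio plugin over /dev/fd/62 while the generated fio job file arrives on /dev/fd/61. A standalone equivalent with ordinary files, assuming the plugin was built at build/fio/spdk_bdev as in this run (bdev.json and job.fio are placeholder names):

  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio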
00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@581 -- # IFS=, 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:23:20.518 "params": { 00:23:20.518 "name": "Nvme0", 00:23:20.518 "trtype": "tcp", 00:23:20.518 "traddr": "10.0.0.3", 00:23:20.518 "adrfam": "ipv4", 00:23:20.518 "trsvcid": "4420", 00:23:20.518 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:20.518 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:20.518 "hdgst": false, 00:23:20.518 "ddgst": false 00:23:20.518 }, 00:23:20.518 "method": "bdev_nvme_attach_controller" 00:23:20.518 },{ 00:23:20.518 "params": { 00:23:20.518 "name": "Nvme1", 00:23:20.518 "trtype": "tcp", 00:23:20.518 "traddr": "10.0.0.3", 00:23:20.518 "adrfam": "ipv4", 00:23:20.518 "trsvcid": "4420", 00:23:20.518 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.518 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:20.518 "hdgst": false, 00:23:20.518 "ddgst": false 00:23:20.518 }, 00:23:20.518 "method": "bdev_nvme_attach_controller" 00:23:20.518 }' 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:20.518 00:39:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:20.518 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:20.518 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:20.518 fio-3.35 00:23:20.518 Starting 2 threads 00:23:30.500 00:23:30.500 filename0: (groupid=0, jobs=1): err= 0: pid=97382: Tue Dec 17 00:39:15 2024 00:23:30.500 read: IOPS=5367, BW=21.0MiB/s (22.0MB/s)(210MiB/10001msec) 00:23:30.500 slat (nsec): min=6202, max=80494, avg=12620.67, stdev=4374.46 00:23:30.500 clat (usec): min=563, max=1708, avg=711.67, stdev=59.92 00:23:30.500 lat (usec): min=570, max=1734, avg=724.29, stdev=60.79 00:23:30.500 clat percentiles (usec): 00:23:30.500 | 1.00th=[ 603], 5.00th=[ 635], 10.00th=[ 652], 20.00th=[ 668], 00:23:30.500 | 30.00th=[ 685], 40.00th=[ 693], 50.00th=[ 701], 60.00th=[ 717], 00:23:30.500 | 70.00th=[ 734], 80.00th=[ 750], 90.00th=[ 783], 95.00th=[ 824], 00:23:30.500 | 99.00th=[ 914], 99.50th=[ 955], 99.90th=[ 1037], 99.95th=[ 1090], 00:23:30.500 | 99.99th=[ 1188] 00:23:30.500 bw ( KiB/s): min=20928, max=22048, per=49.96%, avg=21450.11, stdev=296.96, samples=19 00:23:30.500 iops : min= 5232, max= 
5512, avg=5362.53, stdev=74.24, samples=19 00:23:30.500 lat (usec) : 750=80.94%, 1000=18.88% 00:23:30.500 lat (msec) : 2=0.18% 00:23:30.500 cpu : usr=89.62%, sys=9.07%, ctx=173, majf=0, minf=9 00:23:30.500 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:30.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:30.500 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:30.500 issued rwts: total=53676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:30.500 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:30.500 filename1: (groupid=0, jobs=1): err= 0: pid=97383: Tue Dec 17 00:39:15 2024 00:23:30.500 read: IOPS=5367, BW=21.0MiB/s (22.0MB/s)(210MiB/10001msec) 00:23:30.500 slat (usec): min=6, max=133, avg=12.94, stdev= 4.50 00:23:30.500 clat (usec): min=605, max=1798, avg=709.50, stdev=55.55 00:23:30.500 lat (usec): min=612, max=1824, avg=722.44, stdev=56.25 00:23:30.500 clat percentiles (usec): 00:23:30.500 | 1.00th=[ 627], 5.00th=[ 644], 10.00th=[ 652], 20.00th=[ 668], 00:23:30.500 | 30.00th=[ 676], 40.00th=[ 693], 50.00th=[ 701], 60.00th=[ 709], 00:23:30.500 | 70.00th=[ 725], 80.00th=[ 742], 90.00th=[ 775], 95.00th=[ 816], 00:23:30.500 | 99.00th=[ 914], 99.50th=[ 947], 99.90th=[ 1037], 99.95th=[ 1090], 00:23:30.500 | 99.99th=[ 1205] 00:23:30.500 bw ( KiB/s): min=20928, max=22048, per=49.96%, avg=21451.79, stdev=295.45, samples=19 00:23:30.500 iops : min= 5232, max= 5512, avg=5362.95, stdev=73.86, samples=19 00:23:30.500 lat (usec) : 750=83.50%, 1000=16.31% 00:23:30.500 lat (msec) : 2=0.19% 00:23:30.500 cpu : usr=89.39%, sys=9.26%, ctx=25, majf=0, minf=9 00:23:30.500 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:30.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:30.500 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:30.500 issued rwts: total=53676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:30.500 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:30.500 00:23:30.500 Run status group 0 (all jobs): 00:23:30.500 READ: bw=41.9MiB/s (44.0MB/s), 21.0MiB/s-21.0MiB/s (22.0MB/s-22.0MB/s), io=419MiB (440MB), run=10001-10001msec 00:23:30.500 00:39:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:23:30.500 00:39:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:23:30.500 00:39:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:30.500 00:39:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:30.500 00:39:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:23:30.500 00:39:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:30.500 00:39:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.500 00:39:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:30.500 00:39:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.500 00:39:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:30.500 00:39:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.500 00:39:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 
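A side note on how the job that just finished was launched (the LD_PRELOAD line appears earlier in this run): both the bdev JSON and the generated job file reach fio as open file descriptors rather than files on disk, with the SPDK bdev engine preloaded into the stock fio binary. A condensed sketch, assuming bash process substitution is what produces the /dev/fd/62 and /dev/fd/61 paths seen in the trace:

# Preload the SPDK fio plugin into the system fio and feed it the bdev
# config plus the generated job file through process substitution (no temp
# files). gen_nvmf_target_json and gen_fio_conf are the harness helpers
# traced above; paths follow the ones printed in this log.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf <(gen_nvmf_target_json 0 1) \
    <(gen_fio_conf)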
00:23:30.500 00:39:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.500 00:39:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:30.500 00:39:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:30.500 00:39:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:23:30.500 00:39:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:30.500 00:39:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.500 00:39:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:30.500 00:39:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.500 00:39:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:30.500 00:39:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.500 00:39:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:30.500 ************************************ 00:23:30.500 END TEST fio_dif_1_multi_subsystems 00:23:30.500 ************************************ 00:23:30.500 00:39:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.500 00:23:30.500 real 0m10.954s 00:23:30.500 user 0m18.567s 00:23:30.500 sys 0m2.040s 00:23:30.500 00:39:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:30.500 00:39:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:30.500 00:39:15 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:23:30.500 00:39:15 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:30.500 00:39:15 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:30.500 00:39:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:30.500 ************************************ 00:23:30.500 START TEST fio_dif_rand_params 00:23:30.500 ************************************ 00:23:30.500 00:39:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:23:30.500 00:39:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:23:30.500 00:39:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:23:30.500 00:39:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:23:30.500 00:39:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:23:30.500 00:39:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:23:30.500 00:39:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:23:30.500 00:39:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:23:30.500 00:39:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:23:30.500 00:39:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:30.500 00:39:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:30.500 00:39:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:30.500 00:39:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:30.500 00:39:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:30.500 00:39:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.500 00:39:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:30.500 bdev_null0 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:30.501 [2024-12-17 00:39:15.799042] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:30.501 { 00:23:30.501 "params": { 00:23:30.501 "name": "Nvme$subsystem", 00:23:30.501 "trtype": "$TEST_TRANSPORT", 00:23:30.501 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:30.501 "adrfam": "ipv4", 00:23:30.501 "trsvcid": "$NVMF_PORT", 00:23:30.501 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:30.501 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:30.501 "hdgst": ${hdgst:-false}, 00:23:30.501 
"ddgst": ${ddgst:-false} 00:23:30.501 }, 00:23:30.501 "method": "bdev_nvme_attach_controller" 00:23:30.501 } 00:23:30.501 EOF 00:23:30.501 )") 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:23:30.501 "params": { 00:23:30.501 "name": "Nvme0", 00:23:30.501 "trtype": "tcp", 00:23:30.501 "traddr": "10.0.0.3", 00:23:30.501 "adrfam": "ipv4", 00:23:30.501 "trsvcid": "4420", 00:23:30.501 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:30.501 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:30.501 "hdgst": false, 00:23:30.501 "ddgst": false 00:23:30.501 }, 00:23:30.501 "method": "bdev_nvme_attach_controller" 00:23:30.501 }' 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:30.501 00:39:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:30.501 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:30.501 ... 00:23:30.501 fio-3.35 00:23:30.501 Starting 3 threads 00:23:35.776 00:23:35.776 filename0: (groupid=0, jobs=1): err= 0: pid=97539: Tue Dec 17 00:39:21 2024 00:23:35.776 read: IOPS=281, BW=35.2MiB/s (36.9MB/s)(176MiB/5005msec) 00:23:35.776 slat (nsec): min=7210, max=45042, avg=13806.75, stdev=3956.16 00:23:35.776 clat (usec): min=7525, max=15164, avg=10617.37, stdev=576.16 00:23:35.776 lat (usec): min=7538, max=15179, avg=10631.17, stdev=576.56 00:23:35.776 clat percentiles (usec): 00:23:35.776 | 1.00th=[10159], 5.00th=[10290], 10.00th=[10290], 20.00th=[10290], 00:23:35.776 | 30.00th=[10421], 40.00th=[10421], 50.00th=[10421], 60.00th=[10552], 00:23:35.776 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11207], 95.00th=[11600], 00:23:35.776 | 99.00th=[12911], 99.50th=[13304], 99.90th=[15139], 99.95th=[15139], 00:23:35.776 | 99.99th=[15139] 00:23:35.776 bw ( KiB/s): min=35328, max=36864, per=33.31%, avg=36010.67, stdev=600.37, samples=9 00:23:35.776 iops : min= 276, max= 288, avg=281.33, stdev= 4.69, samples=9 00:23:35.776 lat (msec) : 10=0.43%, 20=99.57% 00:23:35.776 cpu : usr=91.01%, sys=8.45%, ctx=11, majf=0, minf=9 00:23:35.776 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:35.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.776 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.776 issued rwts: total=1410,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:35.776 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:35.776 filename0: (groupid=0, jobs=1): err= 0: pid=97540: Tue Dec 17 00:39:21 2024 00:23:35.776 read: IOPS=281, BW=35.2MiB/s (36.9MB/s)(176MiB/5005msec) 00:23:35.776 slat (nsec): min=6747, max=54066, avg=14269.05, stdev=4139.37 00:23:35.776 clat (usec): min=7521, max=15160, avg=10615.70, stdev=575.96 00:23:35.776 lat (usec): min=7533, max=15176, avg=10629.97, stdev=576.32 00:23:35.776 clat percentiles (usec): 00:23:35.776 | 1.00th=[10159], 5.00th=[10290], 10.00th=[10290], 20.00th=[10290], 00:23:35.776 | 30.00th=[10421], 40.00th=[10421], 50.00th=[10421], 60.00th=[10552], 00:23:35.776 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11207], 95.00th=[11469], 00:23:35.776 | 99.00th=[12911], 99.50th=[13304], 99.90th=[15139], 99.95th=[15139], 00:23:35.776 | 99.99th=[15139] 00:23:35.776 bw ( KiB/s): min=35328, max=36864, per=33.31%, avg=36010.67, stdev=600.37, samples=9 00:23:35.776 iops : min= 276, max= 288, avg=281.33, stdev= 4.69, samples=9 00:23:35.776 lat (msec) : 10=0.43%, 20=99.57% 00:23:35.776 cpu : usr=90.47%, sys=8.93%, ctx=8, majf=0, minf=9 00:23:35.776 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:35.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.777 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.777 issued rwts: total=1410,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:35.777 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:35.777 filename0: (groupid=0, jobs=1): err= 0: pid=97541: Tue Dec 17 00:39:21 2024 00:23:35.777 read: IOPS=281, BW=35.2MiB/s (36.9MB/s)(176MiB/5001msec) 00:23:35.777 slat (nsec): min=6491, max=58524, avg=9375.56, stdev=4371.66 00:23:35.777 clat (usec): min=10053, max=15069, avg=10639.09, stdev=577.61 00:23:35.777 lat (usec): min=10060, max=15086, 
avg=10648.46, stdev=578.04 00:23:35.777 clat percentiles (usec): 00:23:35.777 | 1.00th=[10159], 5.00th=[10290], 10.00th=[10290], 20.00th=[10290], 00:23:35.777 | 30.00th=[10421], 40.00th=[10421], 50.00th=[10421], 60.00th=[10552], 00:23:35.777 | 70.00th=[10683], 80.00th=[10814], 90.00th=[11076], 95.00th=[11600], 00:23:35.777 | 99.00th=[13960], 99.50th=[14484], 99.90th=[15008], 99.95th=[15008], 00:23:35.777 | 99.99th=[15008] 00:23:35.777 bw ( KiB/s): min=35328, max=36864, per=33.31%, avg=36010.67, stdev=461.51, samples=9 00:23:35.777 iops : min= 276, max= 288, avg=281.33, stdev= 3.61, samples=9 00:23:35.777 lat (msec) : 20=100.00% 00:23:35.777 cpu : usr=89.24%, sys=9.88%, ctx=58, majf=0, minf=9 00:23:35.777 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:35.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.777 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:35.777 issued rwts: total=1407,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:35.777 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:35.777 00:23:35.777 Run status group 0 (all jobs): 00:23:35.777 READ: bw=106MiB/s (111MB/s), 35.2MiB/s-35.2MiB/s (36.9MB/s-36.9MB/s), io=528MiB (554MB), run=5001-5005msec 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 
-- # local sub_id=0 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:35.777 bdev_null0 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:35.777 [2024-12-17 00:39:21.649201] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:35.777 bdev_null1 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:35.777 bdev_null2 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@82 -- # gen_fio_conf 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:35.777 { 00:23:35.777 "params": { 00:23:35.777 "name": "Nvme$subsystem", 00:23:35.777 "trtype": "$TEST_TRANSPORT", 00:23:35.777 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:35.777 "adrfam": "ipv4", 00:23:35.777 "trsvcid": "$NVMF_PORT", 00:23:35.777 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:35.777 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:35.777 "hdgst": ${hdgst:-false}, 00:23:35.777 "ddgst": ${ddgst:-false} 00:23:35.777 }, 00:23:35.777 "method": "bdev_nvme_attach_controller" 00:23:35.777 } 00:23:35.777 EOF 00:23:35.777 )") 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:35.777 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:35.778 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:35.778 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:23:35.778 00:39:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:23:35.778 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:35.778 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:35.778 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:35.778 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:35.778 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:35.778 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:35.778 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:35.778 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:23:35.778 00:39:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:35.778 00:39:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:35.778 { 00:23:35.778 "params": { 00:23:35.778 "name": "Nvme$subsystem", 00:23:35.778 "trtype": "$TEST_TRANSPORT", 00:23:35.778 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:35.778 "adrfam": "ipv4", 00:23:35.778 "trsvcid": "$NVMF_PORT", 00:23:35.778 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:35.778 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:35.778 "hdgst": ${hdgst:-false}, 00:23:35.778 "ddgst": ${ddgst:-false} 00:23:35.778 }, 00:23:35.778 "method": "bdev_nvme_attach_controller" 00:23:35.778 } 00:23:35.778 EOF 00:23:35.778 )") 00:23:35.778 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:35.778 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files 
)) 00:23:35.778 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:35.778 00:39:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:23:35.778 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:35.778 00:39:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:35.778 00:39:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:35.778 00:39:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:35.778 { 00:23:35.778 "params": { 00:23:35.778 "name": "Nvme$subsystem", 00:23:35.778 "trtype": "$TEST_TRANSPORT", 00:23:35.778 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:35.778 "adrfam": "ipv4", 00:23:35.778 "trsvcid": "$NVMF_PORT", 00:23:35.778 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:35.778 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:35.778 "hdgst": ${hdgst:-false}, 00:23:35.778 "ddgst": ${ddgst:-false} 00:23:35.778 }, 00:23:35.778 "method": "bdev_nvme_attach_controller" 00:23:35.778 } 00:23:35.778 EOF 00:23:35.778 )") 00:23:35.778 00:39:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:23:35.778 00:39:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 00:23:35.778 00:39:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:23:35.778 00:39:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:23:35.778 "params": { 00:23:35.778 "name": "Nvme0", 00:23:35.778 "trtype": "tcp", 00:23:35.778 "traddr": "10.0.0.3", 00:23:35.778 "adrfam": "ipv4", 00:23:35.778 "trsvcid": "4420", 00:23:35.778 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:35.778 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:35.778 "hdgst": false, 00:23:35.778 "ddgst": false 00:23:35.778 }, 00:23:35.778 "method": "bdev_nvme_attach_controller" 00:23:35.778 },{ 00:23:35.778 "params": { 00:23:35.778 "name": "Nvme1", 00:23:35.778 "trtype": "tcp", 00:23:35.778 "traddr": "10.0.0.3", 00:23:35.778 "adrfam": "ipv4", 00:23:35.778 "trsvcid": "4420", 00:23:35.778 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.778 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:35.778 "hdgst": false, 00:23:35.778 "ddgst": false 00:23:35.778 }, 00:23:35.778 "method": "bdev_nvme_attach_controller" 00:23:35.778 },{ 00:23:35.778 "params": { 00:23:35.778 "name": "Nvme2", 00:23:35.778 "trtype": "tcp", 00:23:35.778 "traddr": "10.0.0.3", 00:23:35.778 "adrfam": "ipv4", 00:23:35.778 "trsvcid": "4420", 00:23:35.778 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:35.778 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:35.778 "hdgst": false, 00:23:35.778 "ddgst": false 00:23:35.778 }, 00:23:35.778 "method": "bdev_nvme_attach_controller" 00:23:35.778 }' 00:23:35.778 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:35.778 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:35.778 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:35.778 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:35.778 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:35.778 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:36.037 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:36.037 
00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:36.037 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:36.037 00:39:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:36.037 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:36.037 ... 00:23:36.037 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:36.037 ... 00:23:36.037 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:36.037 ... 00:23:36.037 fio-3.35 00:23:36.037 Starting 24 threads 00:23:48.338 00:23:48.338 filename0: (groupid=0, jobs=1): err= 0: pid=97636: Tue Dec 17 00:39:32 2024 00:23:48.338 read: IOPS=202, BW=808KiB/s (828kB/s)(8096KiB/10016msec) 00:23:48.338 slat (usec): min=4, max=12056, avg=29.32, stdev=357.20 00:23:48.338 clat (msec): min=29, max=143, avg=79.01, stdev=23.98 00:23:48.338 lat (msec): min=29, max=143, avg=79.04, stdev=23.99 00:23:48.338 clat percentiles (msec): 00:23:48.338 | 1.00th=[ 35], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 57], 00:23:48.338 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 81], 00:23:48.338 | 70.00th=[ 88], 80.00th=[ 107], 90.00th=[ 115], 95.00th=[ 121], 00:23:48.338 | 99.00th=[ 125], 99.50th=[ 134], 99.90th=[ 144], 99.95th=[ 144], 00:23:48.338 | 99.99th=[ 144] 00:23:48.338 bw ( KiB/s): min= 662, max= 1000, per=4.36%, avg=803.10, stdev=129.95, samples=20 00:23:48.338 iops : min= 165, max= 250, avg=200.75, stdev=32.52, samples=20 00:23:48.338 lat (msec) : 50=13.69%, 100=61.36%, 250=24.95% 00:23:48.338 cpu : usr=40.43%, sys=2.37%, ctx=1314, majf=0, minf=9 00:23:48.338 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.6%, 16=15.6%, 32=0.0%, >=64=0.0% 00:23:48.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.338 complete : 0=0.0%, 4=86.7%, 8=13.2%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.338 issued rwts: total=2024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.338 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:48.338 filename0: (groupid=0, jobs=1): err= 0: pid=97637: Tue Dec 17 00:39:32 2024 00:23:48.338 read: IOPS=200, BW=801KiB/s (820kB/s)(8008KiB/10003msec) 00:23:48.338 slat (usec): min=3, max=8033, avg=36.76, stdev=413.82 00:23:48.338 clat (msec): min=2, max=155, avg=79.80, stdev=25.98 00:23:48.338 lat (msec): min=2, max=155, avg=79.83, stdev=25.98 00:23:48.338 clat percentiles (msec): 00:23:48.338 | 1.00th=[ 6], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 61], 00:23:48.338 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 83], 00:23:48.338 | 70.00th=[ 95], 80.00th=[ 108], 90.00th=[ 117], 95.00th=[ 121], 00:23:48.338 | 99.00th=[ 128], 99.50th=[ 140], 99.90th=[ 142], 99.95th=[ 142], 00:23:48.338 | 99.99th=[ 157] 00:23:48.338 bw ( KiB/s): min= 616, max= 1024, per=4.25%, avg=782.05, stdev=124.93, samples=19 00:23:48.338 iops : min= 154, max= 256, avg=195.47, stdev=31.26, samples=19 00:23:48.338 lat (msec) : 4=0.15%, 10=1.95%, 20=0.45%, 50=12.59%, 100=57.64% 00:23:48.338 lat (msec) : 250=27.22% 00:23:48.338 cpu : usr=32.06%, sys=1.88%, ctx=928, majf=0, minf=9 00:23:48.338 IO depths : 1=0.1%, 2=0.4%, 4=1.7%, 8=82.0%, 16=15.8%, 32=0.0%, >=64=0.0% 00:23:48.338 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.338 complete : 0=0.0%, 4=87.5%, 8=12.2%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.338 issued rwts: total=2002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.338 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:48.338 filename0: (groupid=0, jobs=1): err= 0: pid=97638: Tue Dec 17 00:39:32 2024 00:23:48.338 read: IOPS=189, BW=758KiB/s (777kB/s)(7628KiB/10059msec) 00:23:48.338 slat (usec): min=3, max=8023, avg=25.51, stdev=317.51 00:23:48.338 clat (msec): min=28, max=155, avg=84.22, stdev=25.14 00:23:48.338 lat (msec): min=28, max=155, avg=84.25, stdev=25.14 00:23:48.338 clat percentiles (msec): 00:23:48.338 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 49], 20.00th=[ 64], 00:23:48.338 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 80], 60.00th=[ 88], 00:23:48.338 | 70.00th=[ 106], 80.00th=[ 109], 90.00th=[ 120], 95.00th=[ 121], 00:23:48.338 | 99.00th=[ 133], 99.50th=[ 144], 99.90th=[ 155], 99.95th=[ 157], 00:23:48.338 | 99.99th=[ 157] 00:23:48.338 bw ( KiB/s): min= 584, max= 1040, per=4.11%, avg=756.80, stdev=142.17, samples=20 00:23:48.338 iops : min= 146, max= 260, avg=189.15, stdev=35.55, samples=20 00:23:48.338 lat (msec) : 50=11.12%, 100=55.95%, 250=32.93% 00:23:48.338 cpu : usr=32.15%, sys=2.06%, ctx=941, majf=0, minf=9 00:23:48.338 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=81.2%, 16=16.5%, 32=0.0%, >=64=0.0% 00:23:48.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.338 complete : 0=0.0%, 4=88.0%, 8=11.6%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.338 issued rwts: total=1907,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.338 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:48.338 filename0: (groupid=0, jobs=1): err= 0: pid=97639: Tue Dec 17 00:39:32 2024 00:23:48.338 read: IOPS=198, BW=792KiB/s (811kB/s)(7964KiB/10052msec) 00:23:48.338 slat (usec): min=3, max=5025, avg=16.97, stdev=112.40 00:23:48.338 clat (msec): min=24, max=155, avg=80.56, stdev=24.82 00:23:48.338 lat (msec): min=24, max=155, avg=80.57, stdev=24.83 00:23:48.338 clat percentiles (msec): 00:23:48.338 | 1.00th=[ 32], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 61], 00:23:48.338 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 84], 00:23:48.338 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 116], 95.00th=[ 121], 00:23:48.338 | 99.00th=[ 129], 99.50th=[ 142], 99.90th=[ 157], 99.95th=[ 157], 00:23:48.338 | 99.99th=[ 157] 00:23:48.338 bw ( KiB/s): min= 616, max= 1232, per=4.30%, avg=791.90, stdev=164.03, samples=20 00:23:48.338 iops : min= 154, max= 308, avg=197.95, stdev=41.03, samples=20 00:23:48.338 lat (msec) : 50=14.31%, 100=58.36%, 250=27.32% 00:23:48.338 cpu : usr=36.73%, sys=2.01%, ctx=1061, majf=0, minf=9 00:23:48.338 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.0%, 16=16.2%, 32=0.0%, >=64=0.0% 00:23:48.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.338 complete : 0=0.0%, 4=87.3%, 8=12.6%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.338 issued rwts: total=1991,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.338 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:48.338 filename0: (groupid=0, jobs=1): err= 0: pid=97640: Tue Dec 17 00:39:32 2024 00:23:48.338 read: IOPS=190, BW=760KiB/s (779kB/s)(7608KiB/10005msec) 00:23:48.338 slat (usec): min=5, max=8028, avg=27.69, stdev=318.03 00:23:48.338 clat (msec): min=6, max=143, avg=84.00, stdev=23.45 00:23:48.338 lat (msec): min=6, max=143, avg=84.03, stdev=23.44 00:23:48.338 clat percentiles (msec): 00:23:48.338 | 
1.00th=[ 12], 5.00th=[ 48], 10.00th=[ 58], 20.00th=[ 71], 00:23:48.338 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 81], 60.00th=[ 85], 00:23:48.338 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 120], 95.00th=[ 121], 00:23:48.338 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:23:48.338 | 99.99th=[ 144] 00:23:48.338 bw ( KiB/s): min= 640, max= 912, per=4.08%, avg=750.89, stdev=90.12, samples=19 00:23:48.338 iops : min= 160, max= 228, avg=187.68, stdev=22.55, samples=19 00:23:48.338 lat (msec) : 10=0.63%, 20=0.74%, 50=6.20%, 100=64.88%, 250=27.55% 00:23:48.338 cpu : usr=31.37%, sys=1.89%, ctx=880, majf=0, minf=9 00:23:48.338 IO depths : 1=0.1%, 2=1.9%, 4=7.5%, 8=75.8%, 16=14.7%, 32=0.0%, >=64=0.0% 00:23:48.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.338 complete : 0=0.0%, 4=88.9%, 8=9.5%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.338 issued rwts: total=1902,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.338 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:48.338 filename0: (groupid=0, jobs=1): err= 0: pid=97641: Tue Dec 17 00:39:32 2024 00:23:48.338 read: IOPS=188, BW=754KiB/s (773kB/s)(7572KiB/10036msec) 00:23:48.338 slat (usec): min=3, max=4037, avg=24.81, stdev=200.76 00:23:48.338 clat (msec): min=39, max=147, avg=84.60, stdev=21.74 00:23:48.338 lat (msec): min=39, max=147, avg=84.63, stdev=21.73 00:23:48.338 clat percentiles (msec): 00:23:48.338 | 1.00th=[ 45], 5.00th=[ 51], 10.00th=[ 63], 20.00th=[ 69], 00:23:48.338 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 80], 60.00th=[ 85], 00:23:48.338 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 121], 00:23:48.338 | 99.00th=[ 140], 99.50th=[ 148], 99.90th=[ 148], 99.95th=[ 148], 00:23:48.338 | 99.99th=[ 148] 00:23:48.338 bw ( KiB/s): min= 616, max= 920, per=4.08%, avg=750.80, stdev=111.22, samples=20 00:23:48.338 iops : min= 154, max= 230, avg=187.70, stdev=27.81, samples=20 00:23:48.338 lat (msec) : 50=4.81%, 100=67.67%, 250=27.52% 00:23:48.338 cpu : usr=38.17%, sys=2.30%, ctx=1236, majf=0, minf=9 00:23:48.338 IO depths : 1=0.1%, 2=2.1%, 4=8.4%, 8=74.8%, 16=14.6%, 32=0.0%, >=64=0.0% 00:23:48.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.338 complete : 0=0.0%, 4=89.2%, 8=9.0%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.338 issued rwts: total=1893,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.338 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:48.339 filename0: (groupid=0, jobs=1): err= 0: pid=97642: Tue Dec 17 00:39:32 2024 00:23:48.339 read: IOPS=200, BW=804KiB/s (823kB/s)(8100KiB/10078msec) 00:23:48.339 slat (usec): min=3, max=8025, avg=16.46, stdev=178.12 00:23:48.339 clat (usec): min=563, max=152312, avg=79420.69, stdev=29975.45 00:23:48.339 lat (usec): min=573, max=152327, avg=79437.15, stdev=29973.56 00:23:48.339 clat percentiles (msec): 00:23:48.339 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 47], 20.00th=[ 58], 00:23:48.339 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 85], 00:23:48.339 | 70.00th=[ 99], 80.00th=[ 108], 90.00th=[ 120], 95.00th=[ 121], 00:23:48.339 | 99.00th=[ 131], 99.50th=[ 144], 99.90th=[ 153], 99.95th=[ 153], 00:23:48.339 | 99.99th=[ 153] 00:23:48.339 bw ( KiB/s): min= 560, max= 1800, per=4.36%, avg=803.10, stdev=275.65, samples=20 00:23:48.339 iops : min= 140, max= 450, avg=200.75, stdev=68.93, samples=20 00:23:48.339 lat (usec) : 750=0.10% 00:23:48.339 lat (msec) : 4=3.65%, 10=1.78%, 50=10.32%, 100=54.81%, 250=29.33% 00:23:48.339 cpu : usr=31.49%, sys=1.73%, ctx=910, 
majf=0, minf=0 00:23:48.339 IO depths : 1=0.2%, 2=0.8%, 4=2.3%, 8=80.4%, 16=16.2%, 32=0.0%, >=64=0.0% 00:23:48.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.339 complete : 0=0.0%, 4=88.1%, 8=11.3%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.339 issued rwts: total=2025,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.339 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:48.339 filename0: (groupid=0, jobs=1): err= 0: pid=97643: Tue Dec 17 00:39:32 2024 00:23:48.339 read: IOPS=182, BW=730KiB/s (747kB/s)(7336KiB/10054msec) 00:23:48.339 slat (usec): min=8, max=9025, avg=24.30, stdev=266.72 00:23:48.339 clat (msec): min=28, max=159, avg=87.46, stdev=24.43 00:23:48.339 lat (msec): min=28, max=159, avg=87.49, stdev=24.43 00:23:48.339 clat percentiles (msec): 00:23:48.339 | 1.00th=[ 36], 5.00th=[ 49], 10.00th=[ 56], 20.00th=[ 67], 00:23:48.339 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 84], 60.00th=[ 99], 00:23:48.339 | 70.00th=[ 108], 80.00th=[ 111], 90.00th=[ 121], 95.00th=[ 121], 00:23:48.339 | 99.00th=[ 140], 99.50th=[ 148], 99.90th=[ 157], 99.95th=[ 161], 00:23:48.339 | 99.99th=[ 161] 00:23:48.339 bw ( KiB/s): min= 574, max= 952, per=3.96%, avg=729.20, stdev=133.87, samples=20 00:23:48.339 iops : min= 143, max= 238, avg=182.25, stdev=33.47, samples=20 00:23:48.339 lat (msec) : 50=5.83%, 100=56.71%, 250=37.46% 00:23:48.339 cpu : usr=37.02%, sys=2.08%, ctx=1421, majf=0, minf=9 00:23:48.339 IO depths : 1=0.1%, 2=1.8%, 4=7.1%, 8=75.2%, 16=15.8%, 32=0.0%, >=64=0.0% 00:23:48.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.339 complete : 0=0.0%, 4=89.6%, 8=8.9%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.339 issued rwts: total=1834,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.339 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:48.339 filename1: (groupid=0, jobs=1): err= 0: pid=97644: Tue Dec 17 00:39:32 2024 00:23:48.339 read: IOPS=189, BW=758KiB/s (776kB/s)(7640KiB/10078msec) 00:23:48.339 slat (usec): min=4, max=8018, avg=17.51, stdev=183.27 00:23:48.339 clat (usec): min=1589, max=155940, avg=84173.37, stdev=32520.43 00:23:48.339 lat (usec): min=1599, max=155956, avg=84190.88, stdev=32522.64 00:23:48.339 clat percentiles (msec): 00:23:48.339 | 1.00th=[ 3], 5.00th=[ 4], 10.00th=[ 48], 20.00th=[ 69], 00:23:48.339 | 30.00th=[ 72], 40.00th=[ 81], 50.00th=[ 84], 60.00th=[ 96], 00:23:48.339 | 70.00th=[ 108], 80.00th=[ 114], 90.00th=[ 120], 95.00th=[ 124], 00:23:48.339 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 155], 99.95th=[ 157], 00:23:48.339 | 99.99th=[ 157] 00:23:48.339 bw ( KiB/s): min= 528, max= 2147, per=4.11%, avg=756.85, stdev=342.12, samples=20 00:23:48.339 iops : min= 132, max= 536, avg=189.15, stdev=85.38, samples=20 00:23:48.339 lat (msec) : 2=0.84%, 4=5.86%, 10=0.84%, 50=3.66%, 100=52.46% 00:23:48.339 lat (msec) : 250=36.34% 00:23:48.339 cpu : usr=33.21%, sys=1.97%, ctx=1020, majf=0, minf=9 00:23:48.339 IO depths : 1=0.4%, 2=3.0%, 4=10.6%, 8=71.3%, 16=14.8%, 32=0.0%, >=64=0.0% 00:23:48.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.339 complete : 0=0.0%, 4=90.5%, 8=7.2%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.339 issued rwts: total=1910,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.339 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:48.339 filename1: (groupid=0, jobs=1): err= 0: pid=97645: Tue Dec 17 00:39:32 2024 00:23:48.339 read: IOPS=199, BW=798KiB/s (817kB/s)(8000KiB/10031msec) 00:23:48.339 slat (usec): min=4, 
max=8027, avg=27.82, stdev=310.14 00:23:48.339 clat (msec): min=33, max=144, avg=80.06, stdev=23.61 00:23:48.339 lat (msec): min=33, max=144, avg=80.09, stdev=23.62 00:23:48.339 clat percentiles (msec): 00:23:48.339 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 61], 00:23:48.339 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 83], 00:23:48.339 | 70.00th=[ 88], 80.00th=[ 107], 90.00th=[ 117], 95.00th=[ 121], 00:23:48.339 | 99.00th=[ 125], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:23:48.339 | 99.99th=[ 144] 00:23:48.339 bw ( KiB/s): min= 664, max= 1024, per=4.33%, avg=796.10, stdev=130.13, samples=20 00:23:48.339 iops : min= 166, max= 256, avg=199.00, stdev=32.49, samples=20 00:23:48.339 lat (msec) : 50=14.30%, 100=60.55%, 250=25.15% 00:23:48.339 cpu : usr=36.15%, sys=1.87%, ctx=1077, majf=0, minf=9 00:23:48.339 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.4%, 16=15.8%, 32=0.0%, >=64=0.0% 00:23:48.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.339 complete : 0=0.0%, 4=86.9%, 8=12.9%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.339 issued rwts: total=2000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.339 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:48.339 filename1: (groupid=0, jobs=1): err= 0: pid=97646: Tue Dec 17 00:39:32 2024 00:23:48.339 read: IOPS=164, BW=660KiB/s (676kB/s)(6636KiB/10055msec) 00:23:48.339 slat (usec): min=6, max=8027, avg=43.34, stdev=423.05 00:23:48.339 clat (msec): min=48, max=163, avg=96.63, stdev=24.73 00:23:48.339 lat (msec): min=48, max=163, avg=96.68, stdev=24.74 00:23:48.339 clat percentiles (msec): 00:23:48.339 | 1.00th=[ 51], 5.00th=[ 63], 10.00th=[ 67], 20.00th=[ 72], 00:23:48.339 | 30.00th=[ 77], 40.00th=[ 86], 50.00th=[ 99], 60.00th=[ 107], 00:23:48.339 | 70.00th=[ 112], 80.00th=[ 117], 90.00th=[ 126], 95.00th=[ 144], 00:23:48.339 | 99.00th=[ 155], 99.50th=[ 155], 99.90th=[ 165], 99.95th=[ 165], 00:23:48.339 | 99.99th=[ 165] 00:23:48.339 bw ( KiB/s): min= 415, max= 896, per=3.57%, avg=657.20, stdev=122.89, samples=20 00:23:48.339 iops : min= 103, max= 224, avg=164.25, stdev=30.78, samples=20 00:23:48.339 lat (msec) : 50=0.24%, 100=50.93%, 250=48.82% 00:23:48.339 cpu : usr=42.10%, sys=2.42%, ctx=1215, majf=0, minf=9 00:23:48.339 IO depths : 1=0.1%, 2=5.5%, 4=22.5%, 8=59.0%, 16=13.0%, 32=0.0%, >=64=0.0% 00:23:48.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.339 complete : 0=0.0%, 4=93.8%, 8=1.2%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.339 issued rwts: total=1659,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.339 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:48.339 filename1: (groupid=0, jobs=1): err= 0: pid=97647: Tue Dec 17 00:39:32 2024 00:23:48.339 read: IOPS=182, BW=729KiB/s (746kB/s)(7320KiB/10048msec) 00:23:48.339 slat (usec): min=6, max=8037, avg=18.45, stdev=187.61 00:23:48.339 clat (msec): min=38, max=155, avg=87.63, stdev=21.98 00:23:48.339 lat (msec): min=38, max=155, avg=87.65, stdev=21.98 00:23:48.339 clat percentiles (msec): 00:23:48.339 | 1.00th=[ 48], 5.00th=[ 50], 10.00th=[ 61], 20.00th=[ 71], 00:23:48.339 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 85], 60.00th=[ 96], 00:23:48.339 | 70.00th=[ 108], 80.00th=[ 109], 90.00th=[ 120], 95.00th=[ 121], 00:23:48.339 | 99.00th=[ 132], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:23:48.339 | 99.99th=[ 157] 00:23:48.339 bw ( KiB/s): min= 560, max= 897, per=3.94%, avg=725.55, stdev=101.37, samples=20 00:23:48.339 iops : min= 140, max= 224, avg=181.35, 
stdev=25.35, samples=20 00:23:48.339 lat (msec) : 50=5.14%, 100=63.11%, 250=31.75% 00:23:48.339 cpu : usr=32.41%, sys=1.68%, ctx=912, majf=0, minf=9 00:23:48.339 IO depths : 1=0.1%, 2=1.9%, 4=7.6%, 8=75.1%, 16=15.3%, 32=0.0%, >=64=0.0% 00:23:48.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.339 complete : 0=0.0%, 4=89.4%, 8=9.0%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.339 issued rwts: total=1830,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.339 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:48.339 filename1: (groupid=0, jobs=1): err= 0: pid=97648: Tue Dec 17 00:39:32 2024 00:23:48.339 read: IOPS=194, BW=779KiB/s (798kB/s)(7812KiB/10027msec) 00:23:48.339 slat (usec): min=3, max=8026, avg=23.71, stdev=222.07 00:23:48.339 clat (msec): min=34, max=148, avg=82.02, stdev=23.30 00:23:48.339 lat (msec): min=34, max=148, avg=82.05, stdev=23.29 00:23:48.339 clat percentiles (msec): 00:23:48.339 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 64], 00:23:48.339 | 30.00th=[ 71], 40.00th=[ 73], 50.00th=[ 79], 60.00th=[ 84], 00:23:48.339 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 116], 95.00th=[ 121], 00:23:48.339 | 99.00th=[ 128], 99.50th=[ 134], 99.90th=[ 146], 99.95th=[ 148], 00:23:48.339 | 99.99th=[ 148] 00:23:48.339 bw ( KiB/s): min= 608, max= 1024, per=4.20%, avg=773.95, stdev=125.60, samples=20 00:23:48.339 iops : min= 152, max= 256, avg=193.45, stdev=31.39, samples=20 00:23:48.339 lat (msec) : 50=10.34%, 100=61.80%, 250=27.85% 00:23:48.339 cpu : usr=40.98%, sys=2.52%, ctx=1201, majf=0, minf=10 00:23:48.339 IO depths : 1=0.1%, 2=0.6%, 4=2.4%, 8=81.3%, 16=15.7%, 32=0.0%, >=64=0.0% 00:23:48.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.339 complete : 0=0.0%, 4=87.6%, 8=11.9%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.339 issued rwts: total=1953,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.339 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:48.339 filename1: (groupid=0, jobs=1): err= 0: pid=97649: Tue Dec 17 00:39:32 2024 00:23:48.339 read: IOPS=186, BW=746KiB/s (764kB/s)(7504KiB/10053msec) 00:23:48.339 slat (usec): min=5, max=4024, avg=23.84, stdev=181.58 00:23:48.339 clat (msec): min=28, max=158, avg=85.46, stdev=22.52 00:23:48.339 lat (msec): min=28, max=158, avg=85.49, stdev=22.52 00:23:48.339 clat percentiles (msec): 00:23:48.339 | 1.00th=[ 42], 5.00th=[ 52], 10.00th=[ 61], 20.00th=[ 69], 00:23:48.339 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 81], 60.00th=[ 88], 00:23:48.339 | 70.00th=[ 103], 80.00th=[ 110], 90.00th=[ 118], 95.00th=[ 121], 00:23:48.339 | 99.00th=[ 133], 99.50th=[ 142], 99.90th=[ 159], 99.95th=[ 159], 00:23:48.339 | 99.99th=[ 159] 00:23:48.339 bw ( KiB/s): min= 624, max= 1001, per=4.05%, avg=745.95, stdev=105.16, samples=20 00:23:48.339 iops : min= 156, max= 250, avg=186.45, stdev=26.28, samples=20 00:23:48.339 lat (msec) : 50=4.32%, 100=64.98%, 250=30.70% 00:23:48.339 cpu : usr=41.56%, sys=2.37%, ctx=1401, majf=0, minf=9 00:23:48.339 IO depths : 1=0.1%, 2=1.8%, 4=7.5%, 8=75.5%, 16=15.1%, 32=0.0%, >=64=0.0% 00:23:48.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.340 complete : 0=0.0%, 4=89.2%, 8=9.1%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.340 issued rwts: total=1876,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.340 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:48.340 filename1: (groupid=0, jobs=1): err= 0: pid=97650: Tue Dec 17 00:39:32 2024 00:23:48.340 read: IOPS=200, BW=802KiB/s 
(821kB/s)(8024KiB/10006msec) 00:23:48.340 slat (usec): min=3, max=8025, avg=26.41, stdev=237.18 00:23:48.340 clat (msec): min=7, max=167, avg=79.67, stdev=24.96 00:23:48.340 lat (msec): min=7, max=167, avg=79.70, stdev=24.96 00:23:48.340 clat percentiles (msec): 00:23:48.340 | 1.00th=[ 28], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 58], 00:23:48.340 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 82], 00:23:48.340 | 70.00th=[ 92], 80.00th=[ 108], 90.00th=[ 116], 95.00th=[ 121], 00:23:48.340 | 99.00th=[ 132], 99.50th=[ 136], 99.90th=[ 148], 99.95th=[ 167], 00:23:48.340 | 99.99th=[ 167] 00:23:48.340 bw ( KiB/s): min= 640, max= 1024, per=4.32%, avg=794.42, stdev=129.10, samples=19 00:23:48.340 iops : min= 160, max= 256, avg=198.58, stdev=32.30, samples=19 00:23:48.340 lat (msec) : 10=0.50%, 20=0.50%, 50=10.72%, 100=62.36%, 250=25.92% 00:23:48.340 cpu : usr=43.07%, sys=2.38%, ctx=1588, majf=0, minf=9 00:23:48.340 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=82.8%, 16=15.5%, 32=0.0%, >=64=0.0% 00:23:48.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.340 complete : 0=0.0%, 4=87.0%, 8=12.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.340 issued rwts: total=2006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.340 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:48.340 filename1: (groupid=0, jobs=1): err= 0: pid=97651: Tue Dec 17 00:39:32 2024 00:23:48.340 read: IOPS=197, BW=790KiB/s (809kB/s)(7924KiB/10032msec) 00:23:48.340 slat (usec): min=4, max=4034, avg=19.95, stdev=125.41 00:23:48.340 clat (msec): min=31, max=144, avg=80.91, stdev=23.53 00:23:48.340 lat (msec): min=31, max=144, avg=80.93, stdev=23.53 00:23:48.340 clat percentiles (msec): 00:23:48.340 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 49], 20.00th=[ 61], 00:23:48.340 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 83], 00:23:48.340 | 70.00th=[ 94], 80.00th=[ 108], 90.00th=[ 116], 95.00th=[ 121], 00:23:48.340 | 99.00th=[ 124], 99.50th=[ 140], 99.90th=[ 144], 99.95th=[ 144], 00:23:48.340 | 99.99th=[ 144] 00:23:48.340 bw ( KiB/s): min= 640, max= 1024, per=4.27%, avg=786.05, stdev=125.64, samples=20 00:23:48.340 iops : min= 160, max= 256, avg=196.50, stdev=31.40, samples=20 00:23:48.340 lat (msec) : 50=10.75%, 100=61.84%, 250=27.41% 00:23:48.340 cpu : usr=42.31%, sys=2.31%, ctx=1254, majf=0, minf=9 00:23:48.340 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=82.3%, 16=15.6%, 32=0.0%, >=64=0.0% 00:23:48.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.340 complete : 0=0.0%, 4=87.2%, 8=12.5%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.340 issued rwts: total=1981,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.340 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:48.340 filename2: (groupid=0, jobs=1): err= 0: pid=97652: Tue Dec 17 00:39:32 2024 00:23:48.340 read: IOPS=190, BW=763KiB/s (782kB/s)(7648KiB/10020msec) 00:23:48.340 slat (usec): min=4, max=8040, avg=34.48, stdev=377.57 00:23:48.340 clat (msec): min=28, max=144, avg=83.68, stdev=21.93 00:23:48.340 lat (msec): min=28, max=144, avg=83.71, stdev=21.93 00:23:48.340 clat percentiles (msec): 00:23:48.340 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 59], 20.00th=[ 69], 00:23:48.340 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 80], 60.00th=[ 84], 00:23:48.340 | 70.00th=[ 97], 80.00th=[ 108], 90.00th=[ 118], 95.00th=[ 121], 00:23:48.340 | 99.00th=[ 122], 99.50th=[ 140], 99.90th=[ 144], 99.95th=[ 144], 00:23:48.340 | 99.99th=[ 144] 00:23:48.340 bw ( KiB/s): min= 630, max= 1000, per=4.11%, avg=757.95, 
stdev=111.33, samples=20 00:23:48.340 iops : min= 157, max= 250, avg=189.45, stdev=27.86, samples=20 00:23:48.340 lat (msec) : 50=7.64%, 100=64.59%, 250=27.77% 00:23:48.340 cpu : usr=31.48%, sys=1.51%, ctx=908, majf=0, minf=9 00:23:48.340 IO depths : 1=0.1%, 2=1.6%, 4=6.4%, 8=76.9%, 16=15.1%, 32=0.0%, >=64=0.0% 00:23:48.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.340 complete : 0=0.0%, 4=88.7%, 8=9.9%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.340 issued rwts: total=1912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.340 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:48.340 filename2: (groupid=0, jobs=1): err= 0: pid=97653: Tue Dec 17 00:39:32 2024 00:23:48.340 read: IOPS=199, BW=798KiB/s (817kB/s)(7980KiB/10002msec) 00:23:48.340 slat (usec): min=4, max=8026, avg=19.16, stdev=179.44 00:23:48.340 clat (usec): min=1056, max=147787, avg=80127.64, stdev=27307.85 00:23:48.340 lat (usec): min=1064, max=147798, avg=80146.80, stdev=27312.23 00:23:48.340 clat percentiles (usec): 00:23:48.340 | 1.00th=[ 1483], 5.00th=[ 39060], 10.00th=[ 47973], 20.00th=[ 62129], 00:23:48.340 | 30.00th=[ 70779], 40.00th=[ 71828], 50.00th=[ 76022], 60.00th=[ 83362], 00:23:48.340 | 70.00th=[ 95945], 80.00th=[107480], 90.00th=[117965], 95.00th=[120062], 00:23:48.340 | 99.00th=[129500], 99.50th=[141558], 99.90th=[147850], 99.95th=[147850], 00:23:48.340 | 99.99th=[147850] 00:23:48.340 bw ( KiB/s): min= 640, max= 968, per=4.16%, avg=765.95, stdev=108.37, samples=19 00:23:48.340 iops : min= 160, max= 242, avg=191.47, stdev=27.10, samples=19 00:23:48.340 lat (msec) : 2=1.60%, 4=0.45%, 10=1.60%, 20=0.50%, 50=9.67% 00:23:48.340 lat (msec) : 100=59.85%, 250=26.32% 00:23:48.340 cpu : usr=31.34%, sys=1.80%, ctx=903, majf=0, minf=9 00:23:48.340 IO depths : 1=0.1%, 2=1.4%, 4=5.6%, 8=78.0%, 16=15.0%, 32=0.0%, >=64=0.0% 00:23:48.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.340 complete : 0=0.0%, 4=88.3%, 8=10.5%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.340 issued rwts: total=1995,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.340 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:48.340 filename2: (groupid=0, jobs=1): err= 0: pid=97654: Tue Dec 17 00:39:32 2024 00:23:48.340 read: IOPS=197, BW=791KiB/s (810kB/s)(7920KiB/10008msec) 00:23:48.340 slat (usec): min=4, max=8030, avg=30.81, stdev=336.67 00:23:48.340 clat (msec): min=11, max=167, avg=80.71, stdev=24.30 00:23:48.340 lat (msec): min=11, max=167, avg=80.74, stdev=24.29 00:23:48.340 clat percentiles (msec): 00:23:48.340 | 1.00th=[ 31], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 61], 00:23:48.340 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 84], 00:23:48.340 | 70.00th=[ 95], 80.00th=[ 108], 90.00th=[ 117], 95.00th=[ 121], 00:23:48.340 | 99.00th=[ 132], 99.50th=[ 144], 99.90th=[ 167], 99.95th=[ 167], 00:23:48.340 | 99.99th=[ 167] 00:23:48.340 bw ( KiB/s): min= 664, max= 1024, per=4.28%, avg=787.90, stdev=115.64, samples=20 00:23:48.340 iops : min= 166, max= 256, avg=196.95, stdev=28.94, samples=20 00:23:48.340 lat (msec) : 20=0.66%, 50=13.28%, 100=60.20%, 250=25.86% 00:23:48.340 cpu : usr=35.99%, sys=1.86%, ctx=970, majf=0, minf=9 00:23:48.340 IO depths : 1=0.1%, 2=0.8%, 4=3.0%, 8=80.9%, 16=15.3%, 32=0.0%, >=64=0.0% 00:23:48.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.340 complete : 0=0.0%, 4=87.5%, 8=11.9%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.340 issued rwts: total=1980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:23:48.340 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:48.340 filename2: (groupid=0, jobs=1): err= 0: pid=97655: Tue Dec 17 00:39:32 2024 00:23:48.340 read: IOPS=199, BW=797KiB/s (816kB/s)(8004KiB/10039msec) 00:23:48.340 slat (usec): min=3, max=8025, avg=28.06, stdev=253.23 00:23:48.340 clat (msec): min=34, max=141, avg=80.07, stdev=23.39 00:23:48.340 lat (msec): min=34, max=141, avg=80.10, stdev=23.40 00:23:48.340 clat percentiles (msec): 00:23:48.340 | 1.00th=[ 39], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 59], 00:23:48.340 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 82], 00:23:48.340 | 70.00th=[ 92], 80.00th=[ 108], 90.00th=[ 115], 95.00th=[ 120], 00:23:48.340 | 99.00th=[ 124], 99.50th=[ 130], 99.90th=[ 142], 99.95th=[ 142], 00:23:48.340 | 99.99th=[ 142] 00:23:48.340 bw ( KiB/s): min= 608, max= 1024, per=4.31%, avg=793.90, stdev=143.72, samples=20 00:23:48.340 iops : min= 152, max= 256, avg=198.45, stdev=35.95, samples=20 00:23:48.340 lat (msec) : 50=12.74%, 100=61.07%, 250=26.19% 00:23:48.340 cpu : usr=42.36%, sys=2.34%, ctx=1282, majf=0, minf=9 00:23:48.340 IO depths : 1=0.1%, 2=0.4%, 4=1.7%, 8=82.2%, 16=15.5%, 32=0.0%, >=64=0.0% 00:23:48.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.340 complete : 0=0.0%, 4=87.2%, 8=12.5%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.340 issued rwts: total=2001,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.340 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:48.340 filename2: (groupid=0, jobs=1): err= 0: pid=97656: Tue Dec 17 00:39:32 2024 00:23:48.340 read: IOPS=180, BW=723KiB/s (740kB/s)(7268KiB/10055msec) 00:23:48.340 slat (usec): min=6, max=4023, avg=19.18, stdev=140.29 00:23:48.340 clat (msec): min=31, max=159, avg=88.32, stdev=23.58 00:23:48.340 lat (msec): min=31, max=159, avg=88.34, stdev=23.58 00:23:48.340 clat percentiles (msec): 00:23:48.340 | 1.00th=[ 46], 5.00th=[ 50], 10.00th=[ 58], 20.00th=[ 70], 00:23:48.340 | 30.00th=[ 73], 40.00th=[ 80], 50.00th=[ 86], 60.00th=[ 94], 00:23:48.340 | 70.00th=[ 107], 80.00th=[ 112], 90.00th=[ 121], 95.00th=[ 123], 00:23:48.340 | 99.00th=[ 136], 99.50th=[ 146], 99.90th=[ 157], 99.95th=[ 161], 00:23:48.340 | 99.99th=[ 161] 00:23:48.340 bw ( KiB/s): min= 512, max= 920, per=3.91%, avg=720.35, stdev=120.27, samples=20 00:23:48.340 iops : min= 128, max= 230, avg=180.05, stdev=30.08, samples=20 00:23:48.340 lat (msec) : 50=5.17%, 100=57.95%, 250=36.87% 00:23:48.340 cpu : usr=39.84%, sys=2.36%, ctx=1304, majf=0, minf=9 00:23:48.340 IO depths : 1=0.1%, 2=1.4%, 4=5.6%, 8=77.1%, 16=15.9%, 32=0.0%, >=64=0.0% 00:23:48.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.340 complete : 0=0.0%, 4=89.0%, 8=9.7%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.340 issued rwts: total=1817,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.340 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:48.340 filename2: (groupid=0, jobs=1): err= 0: pid=97657: Tue Dec 17 00:39:32 2024 00:23:48.340 read: IOPS=196, BW=785KiB/s (804kB/s)(7888KiB/10050msec) 00:23:48.340 slat (usec): min=3, max=3550, avg=16.47, stdev=80.22 00:23:48.340 clat (msec): min=34, max=144, avg=81.36, stdev=23.41 00:23:48.340 lat (msec): min=34, max=144, avg=81.38, stdev=23.41 00:23:48.340 clat percentiles (msec): 00:23:48.340 | 1.00th=[ 40], 5.00th=[ 47], 10.00th=[ 51], 20.00th=[ 62], 00:23:48.340 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 82], 00:23:48.340 | 70.00th=[ 96], 80.00th=[ 109], 90.00th=[ 115], 95.00th=[ 120], 
00:23:48.340 | 99.00th=[ 126], 99.50th=[ 142], 99.90th=[ 144], 99.95th=[ 144], 00:23:48.340 | 99.99th=[ 144] 00:23:48.340 bw ( KiB/s): min= 606, max= 1024, per=4.25%, avg=782.30, stdev=141.10, samples=20 00:23:48.340 iops : min= 151, max= 256, avg=195.55, stdev=35.31, samples=20 00:23:48.340 lat (msec) : 50=9.84%, 100=62.07%, 250=28.09% 00:23:48.340 cpu : usr=44.68%, sys=2.57%, ctx=1269, majf=0, minf=9 00:23:48.340 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=82.1%, 16=15.9%, 32=0.0%, >=64=0.0% 00:23:48.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.341 complete : 0=0.0%, 4=87.4%, 8=12.2%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.341 issued rwts: total=1972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.341 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:48.341 filename2: (groupid=0, jobs=1): err= 0: pid=97658: Tue Dec 17 00:39:32 2024 00:23:48.341 read: IOPS=200, BW=804KiB/s (823kB/s)(8040KiB/10004msec) 00:23:48.341 slat (usec): min=4, max=12030, avg=37.27, stdev=446.28 00:23:48.341 clat (msec): min=2, max=164, avg=79.46, stdev=25.65 00:23:48.341 lat (msec): min=2, max=164, avg=79.49, stdev=25.64 00:23:48.341 clat percentiles (msec): 00:23:48.341 | 1.00th=[ 6], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 61], 00:23:48.341 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 83], 00:23:48.341 | 70.00th=[ 88], 80.00th=[ 108], 90.00th=[ 118], 95.00th=[ 121], 00:23:48.341 | 99.00th=[ 124], 99.50th=[ 136], 99.90th=[ 144], 99.95th=[ 165], 00:23:48.341 | 99.99th=[ 165] 00:23:48.341 bw ( KiB/s): min= 616, max= 1000, per=4.28%, avg=787.05, stdev=115.23, samples=19 00:23:48.341 iops : min= 154, max= 250, avg=196.74, stdev=28.84, samples=19 00:23:48.341 lat (msec) : 4=0.15%, 10=1.89%, 20=0.35%, 50=10.75%, 100=61.44% 00:23:48.341 lat (msec) : 250=25.42% 00:23:48.341 cpu : usr=31.82%, sys=1.71%, ctx=876, majf=0, minf=9 00:23:48.341 IO depths : 1=0.1%, 2=0.7%, 4=2.9%, 8=80.9%, 16=15.4%, 32=0.0%, >=64=0.0% 00:23:48.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.341 complete : 0=0.0%, 4=87.5%, 8=11.8%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.341 issued rwts: total=2010,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.341 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:48.341 filename2: (groupid=0, jobs=1): err= 0: pid=97659: Tue Dec 17 00:39:32 2024 00:23:48.341 read: IOPS=186, BW=746KiB/s (764kB/s)(7480KiB/10024msec) 00:23:48.341 slat (usec): min=8, max=8024, avg=33.23, stdev=321.19 00:23:48.341 clat (msec): min=30, max=144, avg=85.56, stdev=20.61 00:23:48.341 lat (msec): min=30, max=144, avg=85.59, stdev=20.60 00:23:48.341 clat percentiles (msec): 00:23:48.341 | 1.00th=[ 47], 5.00th=[ 53], 10.00th=[ 64], 20.00th=[ 70], 00:23:48.341 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 81], 60.00th=[ 87], 00:23:48.341 | 70.00th=[ 99], 80.00th=[ 108], 90.00th=[ 117], 95.00th=[ 120], 00:23:48.341 | 99.00th=[ 127], 99.50th=[ 136], 99.90th=[ 144], 99.95th=[ 144], 00:23:48.341 | 99.99th=[ 144] 00:23:48.341 bw ( KiB/s): min= 616, max= 896, per=4.03%, avg=741.15, stdev=90.96, samples=20 00:23:48.341 iops : min= 154, max= 224, avg=185.25, stdev=22.73, samples=20 00:23:48.341 lat (msec) : 50=3.85%, 100=66.90%, 250=29.25% 00:23:48.341 cpu : usr=44.05%, sys=2.49%, ctx=1723, majf=0, minf=9 00:23:48.341 IO depths : 1=0.1%, 2=2.2%, 4=8.9%, 8=74.1%, 16=14.7%, 32=0.0%, >=64=0.0% 00:23:48.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.341 complete : 0=0.0%, 4=89.4%, 8=8.6%, 16=2.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.341 issued rwts: total=1870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.341 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:48.341 00:23:48.341 Run status group 0 (all jobs): 00:23:48.341 READ: bw=18.0MiB/s (18.8MB/s), 660KiB/s-808KiB/s (676kB/s-828kB/s), io=181MiB (190MB), run=10002-10078msec 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:48.341 bdev_null0 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:48.341 [2024-12-17 00:39:32.885350] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:48.341 
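Note on the trace above: it sets up the target side for the next fio pass. A 64 MiB null bdev (512-byte blocks, 16-byte metadata, DIF type 1) is created, exported through subsystem nqn.2016-06.io.spdk:cnode0, and given a TCP listener on 10.0.0.3:4420; the second subsystem (bdev_null1 / cnode1) that follows is built the same way. A minimal sketch of the equivalent sequence issued by hand against a running nvmf_tgt with SPDK's scripts/rpc.py, using the names and addresses from the log (the nvmf_create_transport step is included here on the assumption that no TCP transport exists yet):

  # once per target: create the TCP transport
  scripts/rpc.py nvmf_create_transport -t tcp
  # null bdev with 16-byte metadata and DIF type 1, as in the trace
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  # subsystem, namespace, and TCP listener
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
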
00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:48.341 bdev_null1 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:23:48.341 00:39:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:23:48.342 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:48.342 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:48.342 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:48.342 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:48.342 00:39:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:48.342 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:48.342 00:39:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:48.342 { 00:23:48.342 "params": { 00:23:48.342 "name": "Nvme$subsystem", 00:23:48.342 "trtype": "$TEST_TRANSPORT", 00:23:48.342 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.342 "adrfam": "ipv4", 00:23:48.342 "trsvcid": "$NVMF_PORT", 00:23:48.342 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.342 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.342 "hdgst": ${hdgst:-false}, 00:23:48.342 "ddgst": ${ddgst:-false} 00:23:48.342 }, 00:23:48.342 "method": "bdev_nvme_attach_controller" 00:23:48.342 } 00:23:48.342 EOF 00:23:48.342 )") 00:23:48.342 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:48.342 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:48.342 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:48.342 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:48.342 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:23:48.342 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:48.342 00:39:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:23:48.342 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:48.342 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:48.342 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:48.342 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:48.342 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:48.342 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:48.342 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:23:48.342 00:39:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:48.342 00:39:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:48.342 { 00:23:48.342 "params": { 00:23:48.342 "name": "Nvme$subsystem", 00:23:48.342 "trtype": "$TEST_TRANSPORT", 00:23:48.342 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.342 "adrfam": "ipv4", 00:23:48.342 "trsvcid": "$NVMF_PORT", 00:23:48.342 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.342 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.342 "hdgst": ${hdgst:-false}, 00:23:48.342 "ddgst": ${ddgst:-false} 00:23:48.342 }, 00:23:48.342 "method": "bdev_nvme_attach_controller" 00:23:48.342 } 00:23:48.342 EOF 00:23:48.342 )") 00:23:48.342 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:48.342 00:39:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:48.342 00:39:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:23:48.342 00:39:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 
00:23:48.342 00:39:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:23:48.342 00:39:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:23:48.342 "params": { 00:23:48.342 "name": "Nvme0", 00:23:48.342 "trtype": "tcp", 00:23:48.342 "traddr": "10.0.0.3", 00:23:48.342 "adrfam": "ipv4", 00:23:48.342 "trsvcid": "4420", 00:23:48.342 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:48.342 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:48.342 "hdgst": false, 00:23:48.342 "ddgst": false 00:23:48.342 }, 00:23:48.342 "method": "bdev_nvme_attach_controller" 00:23:48.342 },{ 00:23:48.342 "params": { 00:23:48.342 "name": "Nvme1", 00:23:48.342 "trtype": "tcp", 00:23:48.342 "traddr": "10.0.0.3", 00:23:48.342 "adrfam": "ipv4", 00:23:48.342 "trsvcid": "4420", 00:23:48.342 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:48.342 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:48.342 "hdgst": false, 00:23:48.342 "ddgst": false 00:23:48.342 }, 00:23:48.342 "method": "bdev_nvme_attach_controller" 00:23:48.342 }' 00:23:48.342 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:48.342 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:48.342 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:48.342 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:48.342 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:48.342 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:48.342 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:48.342 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:48.342 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:48.342 00:39:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:48.342 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:48.342 ... 00:23:48.342 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:48.342 ... 
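The JSON printed above is what fio consumes through --spdk_json_conf: two bdev_nvme_attach_controller entries (Nvme0 to cnode0 and Nvme1 to cnode1, digests off), with the run launched by LD_PRELOAD-ing the SPDK bdev fio plugin into a stock fio binary and selecting --ioengine=spdk_bdev. A standalone reproduction sketch, assuming that JSON has been saved to bdev.json and that the attached controllers expose bdevs under the default names Nvme0n1 and Nvme1n1; thread=1 is required by the plugin, and the job options mirror those visible in the log:

  # dif.fio (hypothetical job file matching the jobs shown above)
  [global]
  ioengine=spdk_bdev
  thread=1
  rw=randread
  bs=8k,16k,128k
  iodepth=8
  numjobs=2
  runtime=5

  [filename0]
  filename=Nvme0n1

  [filename1]
  filename=Nvme1n1

  # run stock fio through the SPDK bdev engine
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json dif.fio
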
00:23:48.342 fio-3.35 00:23:48.342 Starting 4 threads 00:23:53.619 00:23:53.619 filename0: (groupid=0, jobs=1): err= 0: pid=97798: Tue Dec 17 00:39:38 2024 00:23:53.619 read: IOPS=2097, BW=16.4MiB/s (17.2MB/s)(81.9MiB/5001msec) 00:23:53.619 slat (nsec): min=6728, max=71849, avg=14517.02, stdev=4380.15 00:23:53.619 clat (usec): min=3044, max=5223, avg=3757.79, stdev=204.06 00:23:53.619 lat (usec): min=3057, max=5249, avg=3772.30, stdev=204.55 00:23:53.619 clat percentiles (usec): 00:23:53.619 | 1.00th=[ 3458], 5.00th=[ 3523], 10.00th=[ 3556], 20.00th=[ 3621], 00:23:53.619 | 30.00th=[ 3621], 40.00th=[ 3654], 50.00th=[ 3687], 60.00th=[ 3752], 00:23:53.619 | 70.00th=[ 3851], 80.00th=[ 3916], 90.00th=[ 4015], 95.00th=[ 4113], 00:23:53.619 | 99.00th=[ 4424], 99.50th=[ 4555], 99.90th=[ 4883], 99.95th=[ 5145], 00:23:53.619 | 99.99th=[ 5211] 00:23:53.619 bw ( KiB/s): min=15663, max=17314, per=24.33%, avg=16781.30, stdev=608.12, samples=10 00:23:53.619 iops : min= 1957, max= 2164, avg=2097.50, stdev=76.14, samples=10 00:23:53.619 lat (msec) : 4=89.24%, 10=10.76% 00:23:53.619 cpu : usr=91.74%, sys=7.46%, ctx=5, majf=0, minf=0 00:23:53.619 IO depths : 1=0.1%, 2=25.0%, 4=50.0%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:53.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:53.619 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:53.619 issued rwts: total=10488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:53.619 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:53.619 filename0: (groupid=0, jobs=1): err= 0: pid=97799: Tue Dec 17 00:39:38 2024 00:23:53.619 read: IOPS=2314, BW=18.1MiB/s (19.0MB/s)(90.4MiB/5001msec) 00:23:53.619 slat (nsec): min=6547, max=57925, avg=12083.69, stdev=4621.59 00:23:53.619 clat (usec): min=609, max=6622, avg=3414.06, stdev=723.87 00:23:53.619 lat (usec): min=617, max=6637, avg=3426.15, stdev=724.69 00:23:53.619 clat percentiles (usec): 00:23:53.619 | 1.00th=[ 1254], 5.00th=[ 1336], 10.00th=[ 2671], 20.00th=[ 3458], 00:23:53.619 | 30.00th=[ 3589], 40.00th=[ 3621], 50.00th=[ 3654], 60.00th=[ 3687], 00:23:53.619 | 70.00th=[ 3720], 80.00th=[ 3785], 90.00th=[ 3884], 95.00th=[ 3982], 00:23:53.619 | 99.00th=[ 4178], 99.50th=[ 4293], 99.90th=[ 4424], 99.95th=[ 4490], 00:23:53.619 | 99.99th=[ 4621] 00:23:53.619 bw ( KiB/s): min=17024, max=22384, per=27.05%, avg=18659.56, stdev=2281.29, samples=9 00:23:53.619 iops : min= 2128, max= 2798, avg=2332.44, stdev=285.16, samples=9 00:23:53.619 lat (usec) : 750=0.16%, 1000=0.15% 00:23:53.619 lat (msec) : 2=8.91%, 4=86.76%, 10=4.03% 00:23:53.619 cpu : usr=91.50%, sys=7.66%, ctx=9, majf=0, minf=9 00:23:53.619 IO depths : 1=0.1%, 2=16.7%, 4=54.6%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:53.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:53.619 complete : 0=0.0%, 4=93.5%, 8=6.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:53.619 issued rwts: total=11576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:53.619 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:53.619 filename1: (groupid=0, jobs=1): err= 0: pid=97800: Tue Dec 17 00:39:38 2024 00:23:53.619 read: IOPS=2110, BW=16.5MiB/s (17.3MB/s)(82.4MiB/5001msec) 00:23:53.619 slat (nsec): min=6773, max=59530, avg=14225.62, stdev=4760.61 00:23:53.619 clat (usec): min=903, max=6922, avg=3735.05, stdev=249.11 00:23:53.619 lat (usec): min=912, max=6936, avg=3749.27, stdev=249.58 00:23:53.619 clat percentiles (usec): 00:23:53.619 | 1.00th=[ 2999], 5.00th=[ 3523], 10.00th=[ 3556], 20.00th=[ 
3589], 00:23:53.619 | 30.00th=[ 3621], 40.00th=[ 3654], 50.00th=[ 3687], 60.00th=[ 3752], 00:23:53.619 | 70.00th=[ 3818], 80.00th=[ 3916], 90.00th=[ 4015], 95.00th=[ 4080], 00:23:53.619 | 99.00th=[ 4359], 99.50th=[ 4424], 99.90th=[ 4621], 99.95th=[ 4621], 00:23:53.619 | 99.99th=[ 4686] 00:23:53.619 bw ( KiB/s): min=15663, max=17442, per=24.49%, avg=16888.50, stdev=565.28, samples=10 00:23:53.619 iops : min= 1957, max= 2180, avg=2110.90, stdev=70.82, samples=10 00:23:53.619 lat (usec) : 1000=0.05% 00:23:53.619 lat (msec) : 2=0.26%, 4=89.53%, 10=10.17% 00:23:53.619 cpu : usr=91.06%, sys=8.20%, ctx=7, majf=0, minf=0 00:23:53.619 IO depths : 1=0.1%, 2=24.5%, 4=50.3%, 8=25.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:53.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:53.619 complete : 0=0.0%, 4=90.2%, 8=9.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:53.619 issued rwts: total=10553,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:53.619 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:53.619 filename1: (groupid=0, jobs=1): err= 0: pid=97801: Tue Dec 17 00:39:38 2024 00:23:53.619 read: IOPS=2098, BW=16.4MiB/s (17.2MB/s)(82.0MiB/5001msec) 00:23:53.619 slat (nsec): min=6776, max=62867, avg=14900.91, stdev=4597.75 00:23:53.619 clat (usec): min=1265, max=5234, avg=3753.27, stdev=213.05 00:23:53.619 lat (usec): min=1291, max=5268, avg=3768.17, stdev=213.65 00:23:53.619 clat percentiles (usec): 00:23:53.619 | 1.00th=[ 3425], 5.00th=[ 3523], 10.00th=[ 3556], 20.00th=[ 3589], 00:23:53.619 | 30.00th=[ 3621], 40.00th=[ 3654], 50.00th=[ 3687], 60.00th=[ 3752], 00:23:53.619 | 70.00th=[ 3851], 80.00th=[ 3916], 90.00th=[ 4015], 95.00th=[ 4113], 00:23:53.619 | 99.00th=[ 4424], 99.50th=[ 4555], 99.90th=[ 4883], 99.95th=[ 5145], 00:23:53.619 | 99.99th=[ 5211] 00:23:53.619 bw ( KiB/s): min=15647, max=17314, per=24.34%, avg=16784.50, stdev=605.74, samples=10 00:23:53.619 iops : min= 1955, max= 2164, avg=2097.90, stdev=75.85, samples=10 00:23:53.619 lat (msec) : 2=0.08%, 4=89.35%, 10=10.58% 00:23:53.619 cpu : usr=91.66%, sys=7.56%, ctx=171, majf=0, minf=0 00:23:53.619 IO depths : 1=0.1%, 2=25.0%, 4=50.0%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:53.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:53.619 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:53.619 issued rwts: total=10496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:53.619 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:53.619 00:23:53.619 Run status group 0 (all jobs): 00:23:53.619 READ: bw=67.4MiB/s (70.6MB/s), 16.4MiB/s-18.1MiB/s (17.2MB/s-19.0MB/s), io=337MiB (353MB), run=5001-5001msec 00:23:53.619 00:39:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:23:53.619 00:39:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:53.619 00:39:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:53.619 00:39:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:53.620 00:39:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:53.620 00:39:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:53.620 00:39:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.620 00:39:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:53.620 00:39:38 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.620 00:39:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:53.620 00:39:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.620 00:39:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:53.620 00:39:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.620 00:39:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:53.620 00:39:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:53.620 00:39:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:23:53.620 00:39:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:53.620 00:39:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.620 00:39:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:53.620 00:39:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.620 00:39:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:53.620 00:39:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.620 00:39:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:53.620 ************************************ 00:23:53.620 END TEST fio_dif_rand_params 00:23:53.620 ************************************ 00:23:53.620 00:39:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.620 00:23:53.620 real 0m23.053s 00:23:53.620 user 2m3.489s 00:23:53.620 sys 0m8.680s 00:23:53.620 00:39:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:53.620 00:39:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:53.620 00:39:38 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:23:53.620 00:39:38 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:53.620 00:39:38 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:53.620 00:39:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:53.620 ************************************ 00:23:53.620 START TEST fio_dif_digest 00:23:53.620 ************************************ 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:53.620 bdev_null0 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:53.620 [2024-12-17 00:39:38.899362] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # config=() 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # local subsystem config 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:53.620 { 00:23:53.620 "params": { 00:23:53.620 "name": "Nvme$subsystem", 00:23:53.620 "trtype": "$TEST_TRANSPORT", 00:23:53.620 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.620 "adrfam": "ipv4", 00:23:53.620 "trsvcid": "$NVMF_PORT", 00:23:53.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.620 "hdgst": ${hdgst:-false}, 00:23:53.620 "ddgst": ${ddgst:-false} 00:23:53.620 }, 00:23:53.620 "method": "bdev_nvme_attach_controller" 00:23:53.620 } 00:23:53.620 EOF 00:23:53.620 )") 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # cat 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # jq . 
00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@581 -- # IFS=, 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:23:53.620 "params": { 00:23:53.620 "name": "Nvme0", 00:23:53.620 "trtype": "tcp", 00:23:53.620 "traddr": "10.0.0.3", 00:23:53.620 "adrfam": "ipv4", 00:23:53.620 "trsvcid": "4420", 00:23:53.620 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:53.620 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:53.620 "hdgst": true, 00:23:53.620 "ddgst": true 00:23:53.620 }, 00:23:53.620 "method": "bdev_nvme_attach_controller" 00:23:53.620 }' 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:53.620 00:39:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:53.620 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:53.620 ... 
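Compared with the random-params run earlier, the controller parameters printed above differ only in "hdgst": true and "ddgst": true, which have the initiator enable NVMe/TCP header and data digests (CRC32C protection of PDU headers and payloads) on the connection to cnode0; on the target side the null bdev for this test was created with --dif-type 3 instead of 1. The relevant fragment of the attach call, reformatted for readability (content as in the log):

  {
    "method": "bdev_nvme_attach_controller",
    "params": {
      "name": "Nvme0",
      "trtype": "tcp",
      "traddr": "10.0.0.3",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode0",
      "hostnqn": "nqn.2016-06.io.spdk:host0",
      "hdgst": true,
      "ddgst": true
    }
  }
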
00:23:53.620 fio-3.35 00:23:53.620 Starting 3 threads 00:24:03.598 00:24:03.598 filename0: (groupid=0, jobs=1): err= 0: pid=97901: Tue Dec 17 00:39:49 2024 00:24:03.598 read: IOPS=250, BW=31.3MiB/s (32.8MB/s)(313MiB/10001msec) 00:24:03.598 slat (nsec): min=6846, max=52266, avg=9799.61, stdev=4341.65 00:24:03.598 clat (usec): min=4531, max=13952, avg=11953.29, stdev=516.55 00:24:03.598 lat (usec): min=4540, max=13982, avg=11963.09, stdev=516.83 00:24:03.598 clat percentiles (usec): 00:24:03.598 | 1.00th=[11469], 5.00th=[11600], 10.00th=[11600], 20.00th=[11731], 00:24:03.598 | 30.00th=[11731], 40.00th=[11731], 50.00th=[11863], 60.00th=[11863], 00:24:03.598 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12518], 95.00th=[13042], 00:24:03.598 | 99.00th=[13566], 99.50th=[13698], 99.90th=[13960], 99.95th=[13960], 00:24:03.598 | 99.99th=[13960] 00:24:03.598 bw ( KiB/s): min=31488, max=33024, per=33.39%, avg=32094.32, stdev=484.30, samples=19 00:24:03.598 iops : min= 246, max= 258, avg=250.74, stdev= 3.78, samples=19 00:24:03.598 lat (msec) : 10=0.24%, 20=99.76% 00:24:03.598 cpu : usr=91.08%, sys=8.42%, ctx=17, majf=0, minf=0 00:24:03.598 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:03.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.598 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.598 issued rwts: total=2505,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:03.598 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:03.598 filename0: (groupid=0, jobs=1): err= 0: pid=97902: Tue Dec 17 00:39:49 2024 00:24:03.598 read: IOPS=250, BW=31.3MiB/s (32.8MB/s)(313MiB/10009msec) 00:24:03.598 slat (nsec): min=7082, max=57082, avg=14554.57, stdev=4054.12 00:24:03.598 clat (usec): min=8219, max=15003, avg=11953.41, stdev=473.33 00:24:03.598 lat (usec): min=8232, max=15028, avg=11967.97, stdev=473.76 00:24:03.598 clat percentiles (usec): 00:24:03.598 | 1.00th=[11469], 5.00th=[11600], 10.00th=[11600], 20.00th=[11731], 00:24:03.598 | 30.00th=[11731], 40.00th=[11731], 50.00th=[11863], 60.00th=[11863], 00:24:03.598 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12518], 95.00th=[12911], 00:24:03.598 | 99.00th=[13566], 99.50th=[13829], 99.90th=[15008], 99.95th=[15008], 00:24:03.598 | 99.99th=[15008] 00:24:03.598 bw ( KiB/s): min=31488, max=33024, per=33.35%, avg=32053.89, stdev=501.79, samples=19 00:24:03.598 iops : min= 246, max= 258, avg=250.42, stdev= 3.92, samples=19 00:24:03.598 lat (msec) : 10=0.24%, 20=99.76% 00:24:03.598 cpu : usr=91.58%, sys=7.86%, ctx=13, majf=0, minf=0 00:24:03.598 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:03.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.598 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.598 issued rwts: total=2505,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:03.598 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:03.598 filename0: (groupid=0, jobs=1): err= 0: pid=97903: Tue Dec 17 00:39:49 2024 00:24:03.598 read: IOPS=250, BW=31.3MiB/s (32.8MB/s)(313MiB/10009msec) 00:24:03.598 slat (nsec): min=7192, max=44846, avg=13952.47, stdev=4040.72 00:24:03.598 clat (usec): min=8223, max=15216, avg=11955.80, stdev=474.77 00:24:03.598 lat (usec): min=8236, max=15238, avg=11969.76, stdev=475.07 00:24:03.598 clat percentiles (usec): 00:24:03.598 | 1.00th=[11469], 5.00th=[11600], 10.00th=[11600], 20.00th=[11731], 00:24:03.598 | 30.00th=[11731], 
40.00th=[11731], 50.00th=[11863], 60.00th=[11863], 00:24:03.598 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12518], 95.00th=[13042], 00:24:03.598 | 99.00th=[13566], 99.50th=[13829], 99.90th=[15139], 99.95th=[15270], 00:24:03.598 | 99.99th=[15270] 00:24:03.598 bw ( KiB/s): min=31488, max=33024, per=33.35%, avg=32053.89, stdev=501.79, samples=19 00:24:03.598 iops : min= 246, max= 258, avg=250.42, stdev= 3.92, samples=19 00:24:03.598 lat (msec) : 10=0.24%, 20=99.76% 00:24:03.598 cpu : usr=91.65%, sys=7.85%, ctx=6, majf=0, minf=9 00:24:03.598 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:03.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.598 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.598 issued rwts: total=2505,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:03.598 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:03.598 00:24:03.598 Run status group 0 (all jobs): 00:24:03.598 READ: bw=93.9MiB/s (98.4MB/s), 31.3MiB/s-31.3MiB/s (32.8MB/s-32.8MB/s), io=939MiB (985MB), run=10001-10009msec 00:24:03.857 00:39:49 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:24:03.857 00:39:49 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:24:03.857 00:39:49 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:24:03.857 00:39:49 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:03.857 00:39:49 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:24:03.857 00:39:49 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:03.857 00:39:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.857 00:39:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:03.857 00:39:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.857 00:39:49 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:03.857 00:39:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.857 00:39:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:03.857 ************************************ 00:24:03.857 END TEST fio_dif_digest 00:24:03.857 ************************************ 00:24:03.857 00:39:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.857 00:24:03.857 real 0m10.849s 00:24:03.857 user 0m27.985s 00:24:03.857 sys 0m2.637s 00:24:03.857 00:39:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:03.857 00:39:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:03.857 00:39:49 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:24:03.857 00:39:49 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:24:03.857 00:39:49 nvmf_dif -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:03.857 00:39:49 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:24:03.857 00:39:49 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:03.857 00:39:49 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:24:03.857 00:39:49 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:03.857 00:39:49 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:03.857 rmmod nvme_tcp 00:24:03.857 rmmod nvme_fabrics 00:24:03.857 rmmod nvme_keyring 00:24:03.857 00:39:49 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
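A quick sanity check of the digest numbers above (plain arithmetic on the reported values, not additional test output): each of the three jobs completed 2505 reads of 128 KiB in roughly 10 seconds, so

  2505 * 128 KiB = 320640 KiB ~ 313 MiB per job
  313 MiB / 10.0 s ~ 31.3 MiB/s,  2505 / 10.0 s ~ 250 IOPS

which is consistent with the per-job BW=31.3MiB/s and IOPS=250 lines and with the aggregate READ: bw=93.9MiB/s for the three jobs combined.
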
00:24:04.115 00:39:49 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:24:04.115 00:39:49 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:24:04.115 00:39:49 nvmf_dif -- nvmf/common.sh@513 -- # '[' -n 97171 ']' 00:24:04.115 00:39:49 nvmf_dif -- nvmf/common.sh@514 -- # killprocess 97171 00:24:04.115 00:39:49 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 97171 ']' 00:24:04.115 00:39:49 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 97171 00:24:04.115 00:39:49 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:24:04.115 00:39:49 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:04.115 00:39:49 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97171 00:24:04.115 killing process with pid 97171 00:24:04.115 00:39:49 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:04.115 00:39:49 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:04.115 00:39:49 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97171' 00:24:04.115 00:39:49 nvmf_dif -- common/autotest_common.sh@969 -- # kill 97171 00:24:04.115 00:39:49 nvmf_dif -- common/autotest_common.sh@974 -- # wait 97171 00:24:04.115 00:39:50 nvmf_dif -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:24:04.115 00:39:50 nvmf_dif -- nvmf/common.sh@517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:04.374 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:04.633 Waiting for block devices as requested 00:24:04.633 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:04.633 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:04.633 00:39:50 nvmf_dif -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:04.633 00:39:50 nvmf_dif -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:04.633 00:39:50 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:24:04.633 00:39:50 nvmf_dif -- nvmf/common.sh@787 -- # iptables-save 00:24:04.633 00:39:50 nvmf_dif -- nvmf/common.sh@787 -- # iptables-restore 00:24:04.633 00:39:50 nvmf_dif -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:04.633 00:39:50 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:04.633 00:39:50 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:04.633 00:39:50 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:04.633 00:39:50 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:04.893 00:39:50 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:04.893 00:39:50 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:04.893 00:39:50 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:04.893 00:39:50 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:04.893 00:39:50 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:04.893 00:39:50 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:04.893 00:39:50 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:04.893 00:39:50 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:04.893 00:39:50 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:04.893 00:39:50 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:04.893 00:39:50 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:24:04.893 00:39:50 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:04.893 00:39:50 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.893 00:39:50 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:04.893 00:39:50 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.893 00:39:50 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:24:04.893 00:24:04.893 real 0m58.413s 00:24:04.893 user 3m45.292s 00:24:04.893 sys 0m19.881s 00:24:04.893 00:39:50 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:04.893 ************************************ 00:24:04.893 END TEST nvmf_dif 00:24:04.893 00:39:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:04.893 ************************************ 00:24:04.893 00:39:50 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:04.893 00:39:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:04.893 00:39:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:04.893 00:39:50 -- common/autotest_common.sh@10 -- # set +x 00:24:05.152 ************************************ 00:24:05.152 START TEST nvmf_abort_qd_sizes 00:24:05.152 ************************************ 00:24:05.152 00:39:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:05.152 * Looking for test storage... 00:24:05.152 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:05.152 00:39:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:05.152 00:39:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:24:05.152 00:39:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:05.152 00:39:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:05.152 00:39:51 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:05.152 00:39:51 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:05.152 00:39:51 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:05.152 00:39:51 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:24:05.152 00:39:51 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:24:05.152 00:39:51 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:24:05.152 00:39:51 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:24:05.152 00:39:51 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:24:05.152 00:39:51 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:24:05.152 00:39:51 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:24:05.152 00:39:51 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:05.152 00:39:51 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:24:05.152 00:39:51 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:24:05.152 00:39:51 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:05.152 00:39:51 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:05.152 00:39:51 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:24:05.152 00:39:51 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:24:05.152 00:39:51 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:05.152 00:39:51 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:24:05.152 00:39:51 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:24:05.152 00:39:51 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:24:05.152 00:39:51 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:24:05.152 00:39:51 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:05.152 00:39:51 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:24:05.152 00:39:51 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:24:05.152 00:39:51 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:05.152 00:39:51 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:05.152 00:39:51 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:24:05.152 00:39:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:05.152 00:39:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:05.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.152 --rc genhtml_branch_coverage=1 00:24:05.152 --rc genhtml_function_coverage=1 00:24:05.152 --rc genhtml_legend=1 00:24:05.152 --rc geninfo_all_blocks=1 00:24:05.153 --rc geninfo_unexecuted_blocks=1 00:24:05.153 00:24:05.153 ' 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:05.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.153 --rc genhtml_branch_coverage=1 00:24:05.153 --rc genhtml_function_coverage=1 00:24:05.153 --rc genhtml_legend=1 00:24:05.153 --rc geninfo_all_blocks=1 00:24:05.153 --rc geninfo_unexecuted_blocks=1 00:24:05.153 00:24:05.153 ' 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:05.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.153 --rc genhtml_branch_coverage=1 00:24:05.153 --rc genhtml_function_coverage=1 00:24:05.153 --rc genhtml_legend=1 00:24:05.153 --rc geninfo_all_blocks=1 00:24:05.153 --rc geninfo_unexecuted_blocks=1 00:24:05.153 00:24:05.153 ' 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:05.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.153 --rc genhtml_branch_coverage=1 00:24:05.153 --rc genhtml_function_coverage=1 00:24:05.153 --rc genhtml_legend=1 00:24:05.153 --rc geninfo_all_blocks=1 00:24:05.153 --rc geninfo_unexecuted_blocks=1 00:24:05.153 00:24:05.153 ' 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:05.153 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@456 -- # nvmf_veth_init 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:05.153 Cannot find device "nvmf_init_br" 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:05.153 Cannot find device "nvmf_init_br2" 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:24:05.153 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:05.412 Cannot find device "nvmf_tgt_br" 00:24:05.412 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:24:05.412 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:05.412 Cannot find device "nvmf_tgt_br2" 00:24:05.412 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:24:05.412 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:05.412 Cannot find device "nvmf_init_br" 00:24:05.412 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:24:05.412 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:05.412 Cannot find device "nvmf_init_br2" 00:24:05.412 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:24:05.412 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:05.412 Cannot find device "nvmf_tgt_br" 00:24:05.412 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:24:05.412 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:05.412 Cannot find device "nvmf_tgt_br2" 00:24:05.412 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:24:05.412 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:05.412 Cannot find device "nvmf_br" 00:24:05.412 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:24:05.412 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:05.412 Cannot find device "nvmf_init_if" 00:24:05.412 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:24:05.412 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:05.412 Cannot find device "nvmf_init_if2" 00:24:05.412 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:24:05.412 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:05.412 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
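The "Cannot find device" messages in this stretch are expected: nvmf_veth_init first tries to delete any leftover interfaces from a previous run, and each failing command is followed by 'true' so the cleanup never aborts the script. The commands traced below then rebuild the topology: two veth pairs for the initiator side, two for the target side (moved into the nvmf_tgt_ns_spdk namespace), addresses from 10.0.0.0/24, and a bridge joining the peer ends. A condensed sketch of that topology, showing one pair per side (the second pair, nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4, and the 'ip link set ... up' steps follow the same shape):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

The four ping checks afterwards (10.0.0.3 and 10.0.0.4 from the host, 10.0.0.1 and 10.0.0.2 from inside the namespace) confirm the bridge forwards in both directions before any NVMe-oF traffic is attempted.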
00:24:05.412 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:24:05.412 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:05.412 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:05.412 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:24:05.412 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:05.412 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:05.412 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:05.412 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:05.412 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:05.412 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:05.412 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:05.412 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:05.412 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:05.412 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:05.412 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:05.412 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:05.412 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:05.412 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:05.412 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:05.671 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:05.671 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:05.671 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:05.671 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:05.671 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:05.671 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:05.671 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:05.671 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:05.671 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:05.671 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:05.671 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:05.671 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:05.671 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:05.671 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:05.671 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:05.671 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:05.671 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:05.671 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:05.671 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:05.671 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:24:05.671 00:24:05.671 --- 10.0.0.3 ping statistics --- 00:24:05.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.671 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:24:05.671 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:05.671 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:05.671 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:24:05.671 00:24:05.671 --- 10.0.0.4 ping statistics --- 00:24:05.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.671 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:24:05.671 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:05.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:05.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:24:05.671 00:24:05.671 --- 10.0.0.1 ping statistics --- 00:24:05.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.671 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:24:05.671 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:05.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:05.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:24:05.671 00:24:05.671 --- 10.0.0.2 ping statistics --- 00:24:05.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.671 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:24:05.671 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:05.671 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@457 -- # return 0 00:24:05.671 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:24:05.671 00:39:51 nvmf_abort_qd_sizes -- nvmf/common.sh@475 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:06.238 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:06.497 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:06.497 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:06.497 00:39:52 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:06.497 00:39:52 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:24:06.497 00:39:52 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:24:06.497 00:39:52 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:06.497 00:39:52 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:24:06.497 00:39:52 nvmf_abort_qd_sizes -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:24:06.497 00:39:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:24:06.497 00:39:52 nvmf_abort_qd_sizes -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:06.497 00:39:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:06.497 00:39:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:06.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:06.497 00:39:52 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # nvmfpid=98545 00:24:06.497 00:39:52 nvmf_abort_qd_sizes -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:24:06.497 00:39:52 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # waitforlisten 98545 00:24:06.497 00:39:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 98545 ']' 00:24:06.497 00:39:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:06.497 00:39:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:06.497 00:39:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:06.497 00:39:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:06.497 00:39:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:06.497 [2024-12-17 00:39:52.498091] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
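nvmfappstart launches the SPDK target inside the namespace and waits for its RPC socket. The trace shows the exact invocation, 'ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf': -m 0xf pins the app to cores 0-3 (the four reactors reported below), -e 0xFFFF enables every tracepoint group, and -i 0 sets the shared-memory id. waitforlisten then blocks until /var/tmp/spdk.sock answers; a rough sketch of that idea, not the framework's actual implementation:

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
  nvmfpid=$!
  # Poll the RPC socket until the app is ready to accept commands.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
  done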
00:24:06.497 [2024-12-17 00:39:52.498436] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:06.756 [2024-12-17 00:39:52.637975] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:06.756 [2024-12-17 00:39:52.683972] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:06.756 [2024-12-17 00:39:52.684270] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:06.756 [2024-12-17 00:39:52.684455] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:06.756 [2024-12-17 00:39:52.684471] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:06.756 [2024-12-17 00:39:52.684481] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:06.756 [2024-12-17 00:39:52.685106] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:06.756 [2024-12-17 00:39:52.685277] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:06.756 [2024-12-17 00:39:52.685357] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:24:06.756 [2024-12-17 00:39:52.685366] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.756 [2024-12-17 00:39:52.722233] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:07.015 00:39:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:07.015 00:39:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:24:07.015 00:39:52 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:07.015 00:39:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:07.015 00:39:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:07.015 00:39:52 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:07.015 00:39:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:24:07.015 00:39:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:24:07.015 00:39:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:24:07.015 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:24:07.016 00:39:52 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
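nvme_in_userspace discovers NVMe controllers by PCI class code rather than by driver name: class 01 (mass storage), subclass 08 (non-volatile memory), programming interface 02 is NVM Express, so the script filters lspci output for class code 0108 and applies the pci_can_use allow/block checks before recording each BDF. A one-line equivalent of the pipeline traced above (taken directly from the trace):

  lspci -mm -n -D | grep -i -- -p02 | awk -v cc="0108" -F' ' '{if (cc ~ $2) print $1}' | tr -d '"'

On this VM it yields the two emulated controllers, 0000:00:10.0 and 0000:00:11.0, and the first of them becomes the spdk_target device for the abort test that follows.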
00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:07.016 00:39:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:07.016 ************************************ 00:24:07.016 START TEST spdk_target_abort 00:24:07.016 ************************************ 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:07.016 spdk_targetn1 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:07.016 [2024-12-17 00:39:52.948756] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:07.016 [2024-12-17 00:39:52.977500] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:07.016 00:39:52 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:07.016 00:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:10.303 Initializing NVMe Controllers 00:24:10.303 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:24:10.303 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:10.303 Initialization complete. Launching workers. 
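rabort assembles the connection string field by field (trtype, adrfam, traddr, trsvcid, subnqn) and then runs the SPDK abort example once per queue depth from qds=(4 24 64). The invocation traced above, generalized into the loop the script is effectively executing (the flag glosses are informal: -q is the queue depth, -o the I/O size in bytes, -w the workload pattern, -M the read percentage of the mix):

  target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  for qd in 4 24 64; do
    /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
  done

Each run reports how many I/Os completed, how many abort commands were submitted against them, and how many of those aborts succeeded, which is what the result blocks that follow summarize.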
00:24:10.303 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9835, failed: 0 00:24:10.303 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1087, failed to submit 8748 00:24:10.303 success 874, unsuccessful 213, failed 0 00:24:10.303 00:39:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:10.303 00:39:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:13.591 Initializing NVMe Controllers 00:24:13.591 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:24:13.591 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:13.591 Initialization complete. Launching workers. 00:24:13.591 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9024, failed: 0 00:24:13.591 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1180, failed to submit 7844 00:24:13.591 success 375, unsuccessful 805, failed 0 00:24:13.591 00:39:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:13.592 00:39:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:16.883 Initializing NVMe Controllers 00:24:16.883 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:24:16.883 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:16.883 Initialization complete. Launching workers. 
00:24:16.883 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31593, failed: 0 00:24:16.883 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2286, failed to submit 29307 00:24:16.883 success 484, unsuccessful 1802, failed 0 00:24:16.883 00:40:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:24:16.883 00:40:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.883 00:40:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:16.883 00:40:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.883 00:40:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:24:16.883 00:40:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.883 00:40:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:17.142 00:40:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.142 00:40:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 98545 00:24:17.142 00:40:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 98545 ']' 00:24:17.142 00:40:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 98545 00:24:17.142 00:40:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:24:17.142 00:40:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:17.142 00:40:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98545 00:24:17.142 00:40:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:17.142 00:40:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:17.142 00:40:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98545' 00:24:17.142 killing process with pid 98545 00:24:17.142 00:40:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 98545 00:24:17.142 00:40:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 98545 00:24:17.401 00:24:17.401 real 0m10.314s 00:24:17.401 user 0m39.510s 00:24:17.401 sys 0m2.121s 00:24:17.401 00:40:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:17.401 ************************************ 00:24:17.401 END TEST spdk_target_abort 00:24:17.401 ************************************ 00:24:17.401 00:40:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:17.401 00:40:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:24:17.401 00:40:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:17.401 00:40:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:17.401 00:40:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:17.401 ************************************ 00:24:17.401 START TEST kernel_target_abort 00:24:17.401 
************************************ 00:24:17.401 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:24:17.401 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:24:17.401 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@765 -- # local ip 00:24:17.402 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:17.402 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:17.402 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.402 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.402 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:17.402 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.402 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:17.402 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:17.402 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:17.402 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:17.402 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:17.402 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:24:17.402 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:17.402 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:17.402 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:17.402 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # local block nvme 00:24:17.402 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:17.402 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@666 -- # modprobe nvmet 00:24:17.402 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:17.402 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:17.661 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:17.661 Waiting for block devices as requested 00:24:17.920 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:17.920 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:17.920 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:24:17.920 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:17.920 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:24:17.920 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:24:17.920 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:17.920 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:17.920 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:24:17.920 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:17.920 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:24:17.920 No valid GPT data, bailing 00:24:17.920 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:18.179 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:18.179 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:18.179 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:24:18.180 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:24:18.180 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n2 ]] 00:24:18.180 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n2 00:24:18.180 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:24:18.180 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:24:18.180 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:18.180 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n2 00:24:18.180 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:24:18.180 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:24:18.180 No valid GPT data, bailing 00:24:18.180 00:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n2 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n3 ]] 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n3 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n3 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:24:18.180 No valid GPT data, bailing 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n3 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n1 ]] 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme1n1 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme1n1 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:24:18.180 No valid GPT data, bailing 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme1n1 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # [[ 
-b /dev/nvme1n1 ]] 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo 1 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@692 -- # echo /dev/nvme1n1 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo tcp 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 4420 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo ipv4 00:24:18.180 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:18.439 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 --hostid=93817295-c2e4-400f-aefe-caa93fc06858 -a 10.0.0.1 -t tcp -s 4420 00:24:18.439 00:24:18.439 Discovery Log Number of Records 2, Generation counter 2 00:24:18.439 =====Discovery Log Entry 0====== 00:24:18.439 trtype: tcp 00:24:18.439 adrfam: ipv4 00:24:18.439 subtype: current discovery subsystem 00:24:18.439 treq: not specified, sq flow control disable supported 00:24:18.439 portid: 1 00:24:18.439 trsvcid: 4420 00:24:18.439 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:18.439 traddr: 10.0.0.1 00:24:18.439 eflags: none 00:24:18.439 sectype: none 00:24:18.439 =====Discovery Log Entry 1====== 00:24:18.439 trtype: tcp 00:24:18.439 adrfam: ipv4 00:24:18.439 subtype: nvme subsystem 00:24:18.439 treq: not specified, sq flow control disable supported 00:24:18.439 portid: 1 00:24:18.439 trsvcid: 4420 00:24:18.439 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:18.439 traddr: 10.0.0.1 00:24:18.439 eflags: none 00:24:18.439 sectype: none 00:24:18.439 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:24:18.439 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:18.439 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:18.439 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:24:18.439 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:18.439 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:18.439 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:18.439 00:40:04 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:18.439 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:18.439 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:18.439 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:18.439 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:18.439 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:18.439 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:18.439 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:24:18.439 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:18.439 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:24:18.439 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:18.440 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:18.440 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:18.440 00:40:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:21.729 Initializing NVMe Controllers 00:24:21.729 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:21.729 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:21.729 Initialization complete. Launching workers. 00:24:21.729 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32105, failed: 0 00:24:21.729 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32105, failed to submit 0 00:24:21.729 success 0, unsuccessful 32105, failed 0 00:24:21.729 00:40:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:21.729 00:40:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:25.018 Initializing NVMe Controllers 00:24:25.018 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:25.018 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:25.018 Initialization complete. Launching workers. 
00:24:25.018 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 65132, failed: 0 00:24:25.018 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26453, failed to submit 38679 00:24:25.018 success 0, unsuccessful 26453, failed 0 00:24:25.018 00:40:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:25.018 00:40:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:28.305 Initializing NVMe Controllers 00:24:28.305 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:28.305 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:28.305 Initialization complete. Launching workers. 00:24:28.305 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 69085, failed: 0 00:24:28.305 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17304, failed to submit 51781 00:24:28.305 success 0, unsuccessful 17304, failed 0 00:24:28.305 00:40:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:24:28.305 00:40:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:28.305 00:40:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # echo 0 00:24:28.305 00:40:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:28.305 00:40:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:28.305 00:40:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:28.305 00:40:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:28.305 00:40:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:24:28.305 00:40:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:24:28.305 00:40:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@722 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:28.564 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:29.132 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:29.132 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:29.132 ************************************ 00:24:29.132 END TEST kernel_target_abort 00:24:29.132 ************************************ 00:24:29.132 00:24:29.132 real 0m11.875s 00:24:29.132 user 0m5.637s 00:24:29.132 sys 0m3.615s 00:24:29.132 00:40:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:29.132 00:40:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:29.390 00:40:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:29.390 00:40:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:24:29.390 
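In outline, the kernel_target_abort phase traced above reduces to the sketch below. It is condensed from the nvmf/common.sh and abort_qd_sizes.sh trace: xtrace shows the echo commands but not their redirections, so the nvmet configfs attribute file names are assumed (the standard kernel nvmet ones), and /dev/nvme1n1 is simply the first non-zoned namespace this run found without valid GPT data.

# Expose the chosen namespace as a kernel NVMe/TCP target on 10.0.0.1:4420.
nqn=nqn.2016-06.io.spdk:testnqn
subsys=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo "SPDK-$nqn"  > "$subsys/attr_model"                 # attribute names assumed
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"

# Sanity-check visibility, then drive the target with the SPDK abort example
# at each queue depth under test (NVME_HOST holds --hostnqn/--hostid from nvmf/common.sh).
nvme discover -t tcp -a 10.0.0.1 -s 4420 "${NVME_HOST[@]}"
for qd in 4 24 64; do
    /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
        -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:$nqn"
done

# Tear the target back down (clean_kernel_target in the trace).
echo 0 > "$subsys/namespaces/1/enable"
rm -f "$port/subsystems/$nqn"
rmdir "$subsys/namespaces/1" "$port" "$subsys"
modprobe -r nvmet_tcp nvmet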
00:40:15 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:29.390 00:40:15 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:24:29.390 00:40:15 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:29.390 00:40:15 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:24:29.390 00:40:15 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:29.390 00:40:15 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:29.390 rmmod nvme_tcp 00:24:29.390 rmmod nvme_fabrics 00:24:29.390 rmmod nvme_keyring 00:24:29.390 00:40:15 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:29.390 Process with pid 98545 is not found 00:24:29.390 00:40:15 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:24:29.391 00:40:15 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:24:29.391 00:40:15 nvmf_abort_qd_sizes -- nvmf/common.sh@513 -- # '[' -n 98545 ']' 00:24:29.391 00:40:15 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # killprocess 98545 00:24:29.391 00:40:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 98545 ']' 00:24:29.391 00:40:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 98545 00:24:29.391 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (98545) - No such process 00:24:29.391 00:40:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 98545 is not found' 00:24:29.391 00:40:15 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:24:29.391 00:40:15 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:29.649 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:29.649 Waiting for block devices as requested 00:24:29.908 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:29.908 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:29.908 00:40:15 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:29.908 00:40:15 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:29.908 00:40:15 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:24:29.908 00:40:15 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-save 00:24:29.908 00:40:15 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:29.908 00:40:15 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-restore 00:24:29.908 00:40:15 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:29.908 00:40:15 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:29.908 00:40:15 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:29.908 00:40:15 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:29.908 00:40:15 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:29.908 00:40:15 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:30.167 00:40:15 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:30.167 00:40:15 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:30.167 00:40:15 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:30.167 00:40:15 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:30.167 00:40:15 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:30.167 00:40:15 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:30.167 00:40:15 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:30.167 00:40:16 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:30.167 00:40:16 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:30.167 00:40:16 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:30.167 00:40:16 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.167 00:40:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:30.167 00:40:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.168 00:40:16 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:24:30.168 00:24:30.168 real 0m25.199s 00:24:30.168 user 0m46.279s 00:24:30.168 sys 0m7.187s 00:24:30.168 ************************************ 00:24:30.168 END TEST nvmf_abort_qd_sizes 00:24:30.168 ************************************ 00:24:30.168 00:40:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:30.168 00:40:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:30.168 00:40:16 -- spdk/autotest.sh@288 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:30.168 00:40:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:30.168 00:40:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:30.168 00:40:16 -- common/autotest_common.sh@10 -- # set +x 00:24:30.168 ************************************ 00:24:30.168 START TEST keyring_file 00:24:30.168 ************************************ 00:24:30.168 00:40:16 keyring_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:30.427 * Looking for test storage... 
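The nvmf_tcp_fini / nvmf_veth_fini teardown traced above amounts to the following, slightly reordered for compactness. The final namespace removal is hidden behind _remove_spdk_ns in the trace, so the explicit ip netns del is an assumption.

# Drop the SPDK_NVMF iptables rules, then dismantle the veth/bridge topology
# used by the in-namespace target.
iptables-save | grep -v SPDK_NVMF | iptables-restore
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster
    ip link set "$dev" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns del nvmf_tgt_ns_spdk   # assumed equivalent of _remove_spdk_ns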
00:24:30.427 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:24:30.427 00:40:16 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:30.427 00:40:16 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version 00:24:30.427 00:40:16 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:30.427 00:40:16 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:30.427 00:40:16 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:30.427 00:40:16 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:30.427 00:40:16 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:30.427 00:40:16 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:24:30.427 00:40:16 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:24:30.427 00:40:16 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:24:30.427 00:40:16 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:24:30.427 00:40:16 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:24:30.427 00:40:16 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:24:30.427 00:40:16 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:24:30.427 00:40:16 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:30.427 00:40:16 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:24:30.427 00:40:16 keyring_file -- scripts/common.sh@345 -- # : 1 00:24:30.427 00:40:16 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:30.427 00:40:16 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:30.427 00:40:16 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:24:30.427 00:40:16 keyring_file -- scripts/common.sh@353 -- # local d=1 00:24:30.427 00:40:16 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:30.427 00:40:16 keyring_file -- scripts/common.sh@355 -- # echo 1 00:24:30.427 00:40:16 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:24:30.427 00:40:16 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:24:30.427 00:40:16 keyring_file -- scripts/common.sh@353 -- # local d=2 00:24:30.427 00:40:16 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:30.427 00:40:16 keyring_file -- scripts/common.sh@355 -- # echo 2 00:24:30.427 00:40:16 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:24:30.427 00:40:16 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:30.427 00:40:16 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:30.428 00:40:16 keyring_file -- scripts/common.sh@368 -- # return 0 00:24:30.428 00:40:16 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:30.428 00:40:16 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:30.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.428 --rc genhtml_branch_coverage=1 00:24:30.428 --rc genhtml_function_coverage=1 00:24:30.428 --rc genhtml_legend=1 00:24:30.428 --rc geninfo_all_blocks=1 00:24:30.428 --rc geninfo_unexecuted_blocks=1 00:24:30.428 00:24:30.428 ' 00:24:30.428 00:40:16 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:30.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.428 --rc genhtml_branch_coverage=1 00:24:30.428 --rc genhtml_function_coverage=1 00:24:30.428 --rc genhtml_legend=1 00:24:30.428 --rc geninfo_all_blocks=1 00:24:30.428 --rc 
geninfo_unexecuted_blocks=1 00:24:30.428 00:24:30.428 ' 00:24:30.428 00:40:16 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:30.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.428 --rc genhtml_branch_coverage=1 00:24:30.428 --rc genhtml_function_coverage=1 00:24:30.428 --rc genhtml_legend=1 00:24:30.428 --rc geninfo_all_blocks=1 00:24:30.428 --rc geninfo_unexecuted_blocks=1 00:24:30.428 00:24:30.428 ' 00:24:30.428 00:40:16 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:30.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.428 --rc genhtml_branch_coverage=1 00:24:30.428 --rc genhtml_function_coverage=1 00:24:30.428 --rc genhtml_legend=1 00:24:30.428 --rc geninfo_all_blocks=1 00:24:30.428 --rc geninfo_unexecuted_blocks=1 00:24:30.428 00:24:30.428 ' 00:24:30.428 00:40:16 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:24:30.428 00:40:16 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:30.428 00:40:16 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:24:30.428 00:40:16 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:30.428 00:40:16 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:30.428 00:40:16 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:30.428 00:40:16 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:30.428 00:40:16 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:30.428 00:40:16 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:30.428 00:40:16 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:30.428 00:40:16 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:30.428 00:40:16 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:30.428 00:40:16 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:30.428 00:40:16 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:24:30.428 00:40:16 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:24:30.428 00:40:16 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:30.428 00:40:16 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:30.428 00:40:16 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:30.428 00:40:16 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:30.428 00:40:16 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:30.428 00:40:16 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:24:30.428 00:40:16 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:30.428 00:40:16 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:30.428 00:40:16 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:30.428 00:40:16 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.428 00:40:16 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.428 00:40:16 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.428 00:40:16 keyring_file -- paths/export.sh@5 -- # export PATH 00:24:30.428 00:40:16 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.428 00:40:16 keyring_file -- nvmf/common.sh@51 -- # : 0 00:24:30.428 00:40:16 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:30.428 00:40:16 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:30.428 00:40:16 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:30.428 00:40:16 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:30.428 00:40:16 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:30.428 00:40:16 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:30.428 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:30.428 00:40:16 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:30.428 00:40:16 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:30.428 00:40:16 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:30.428 00:40:16 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:30.428 00:40:16 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:30.428 00:40:16 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:30.428 00:40:16 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:24:30.428 00:40:16 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:24:30.428 00:40:16 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:24:30.428 00:40:16 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:30.428 00:40:16 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:30.428 00:40:16 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:24:30.428 00:40:16 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:30.428 00:40:16 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:30.428 00:40:16 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:30.428 00:40:16 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.pxLNHc1F08 00:24:30.428 00:40:16 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:30.428 00:40:16 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:30.428 00:40:16 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:24:30.428 00:40:16 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:24:30.428 00:40:16 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:24:30.428 00:40:16 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:24:30.428 00:40:16 keyring_file -- nvmf/common.sh@729 -- # python - 00:24:30.688 00:40:16 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.pxLNHc1F08 00:24:30.688 00:40:16 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.pxLNHc1F08 00:24:30.688 00:40:16 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.pxLNHc1F08 00:24:30.688 00:40:16 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:24:30.688 00:40:16 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:30.688 00:40:16 keyring_file -- keyring/common.sh@17 -- # name=key1 00:24:30.688 00:40:16 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:30.688 00:40:16 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:30.688 00:40:16 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:30.688 00:40:16 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.zmX8dMueYy 00:24:30.688 00:40:16 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:24:30.688 00:40:16 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:24:30.688 00:40:16 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:24:30.688 00:40:16 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:24:30.688 00:40:16 keyring_file -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:24:30.688 00:40:16 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:24:30.688 00:40:16 keyring_file -- nvmf/common.sh@729 -- # python - 00:24:30.688 00:40:16 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.zmX8dMueYy 00:24:30.688 00:40:16 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.zmX8dMueYy 00:24:30.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
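The prep_key calls traced above for key0 and key1 do little more than the sketch below. The redirection of format_interchange_psk into each temp file is implied (xtrace does not print redirections), and the interchange encoding itself ("NVMeTLSkey-1:<digest>:<base64 payload>:") is produced by the inline python helper in nvmf/common.sh rather than spelled out here.

# Write each 16-byte hex PSK to a 0600 temp file in the TLS interchange format.
# The strict mode matters: keyring_file_add_key rejects laxer permissions,
# which the test exercises further down with chmod 0660.
key0path=$(mktemp)   # /tmp/tmp.pxLNHc1F08 in this run
format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$key0path"
chmod 0600 "$key0path"
key1path=$(mktemp)   # /tmp/tmp.zmX8dMueYy in this run
format_interchange_psk 112233445566778899aabbccddeeff00 0 > "$key1path"
chmod 0600 "$key1path"
# The files are later registered with the bdevperf instance over its RPC socket:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key0path"
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 "$key1path"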
00:24:30.688 00:40:16 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.zmX8dMueYy 00:24:30.688 00:40:16 keyring_file -- keyring/file.sh@30 -- # tgtpid=99445 00:24:30.688 00:40:16 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:30.688 00:40:16 keyring_file -- keyring/file.sh@32 -- # waitforlisten 99445 00:24:30.688 00:40:16 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 99445 ']' 00:24:30.688 00:40:16 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.688 00:40:16 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:30.688 00:40:16 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:30.688 00:40:16 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:30.688 00:40:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:30.688 [2024-12-17 00:40:16.561813] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:24:30.688 [2024-12-17 00:40:16.562076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99445 ] 00:24:30.947 [2024-12-17 00:40:16.701030] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.947 [2024-12-17 00:40:16.744476] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.947 [2024-12-17 00:40:16.788669] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:30.947 00:40:16 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:30.947 00:40:16 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:24:30.947 00:40:16 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:24:30.947 00:40:16 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.947 00:40:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:30.947 [2024-12-17 00:40:16.934678] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:31.205 null0 00:24:31.205 [2024-12-17 00:40:16.966627] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:31.205 [2024-12-17 00:40:16.966979] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:31.205 00:40:16 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.205 00:40:16 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:31.205 00:40:16 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:24:31.205 00:40:16 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:31.205 00:40:16 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:31.205 00:40:16 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:31.205 00:40:16 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:31.205 00:40:16 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:31.205 00:40:16 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 
127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:31.205 00:40:16 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.205 00:40:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:31.205 [2024-12-17 00:40:16.994617] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:24:31.205 request: 00:24:31.205 { 00:24:31.205 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:24:31.205 "secure_channel": false, 00:24:31.205 "listen_address": { 00:24:31.205 "trtype": "tcp", 00:24:31.205 "traddr": "127.0.0.1", 00:24:31.205 "trsvcid": "4420" 00:24:31.205 }, 00:24:31.205 "method": "nvmf_subsystem_add_listener", 00:24:31.205 "req_id": 1 00:24:31.205 } 00:24:31.205 Got JSON-RPC error response 00:24:31.205 response: 00:24:31.205 { 00:24:31.205 "code": -32602, 00:24:31.205 "message": "Invalid parameters" 00:24:31.205 } 00:24:31.205 00:40:17 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:31.205 00:40:17 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:24:31.205 00:40:17 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:31.205 00:40:17 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:31.205 00:40:17 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:31.205 00:40:17 keyring_file -- keyring/file.sh@47 -- # bperfpid=99450 00:24:31.205 00:40:17 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:24:31.205 00:40:17 keyring_file -- keyring/file.sh@49 -- # waitforlisten 99450 /var/tmp/bperf.sock 00:24:31.205 00:40:17 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 99450 ']' 00:24:31.205 00:40:17 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:31.205 00:40:17 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:31.205 00:40:17 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:31.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:31.205 00:40:17 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:31.205 00:40:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:31.206 [2024-12-17 00:40:17.057236] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:24:31.206 [2024-12-17 00:40:17.057530] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99450 ] 00:24:31.206 [2024-12-17 00:40:17.196669] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.464 [2024-12-17 00:40:17.239029] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:31.464 [2024-12-17 00:40:17.272340] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:31.464 00:40:17 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:31.464 00:40:17 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:24:31.464 00:40:17 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pxLNHc1F08 00:24:31.464 00:40:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pxLNHc1F08 00:24:31.722 00:40:17 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.zmX8dMueYy 00:24:31.722 00:40:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.zmX8dMueYy 00:24:31.981 00:40:17 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:24:31.981 00:40:17 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:24:31.981 00:40:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:31.981 00:40:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:31.981 00:40:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:32.239 00:40:18 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.pxLNHc1F08 == \/\t\m\p\/\t\m\p\.\p\x\L\N\H\c\1\F\0\8 ]] 00:24:32.239 00:40:18 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:24:32.239 00:40:18 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:24:32.239 00:40:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:32.239 00:40:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:32.239 00:40:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:32.497 00:40:18 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.zmX8dMueYy == \/\t\m\p\/\t\m\p\.\z\m\X\8\d\M\u\e\Y\y ]] 00:24:32.497 00:40:18 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:24:32.497 00:40:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:32.497 00:40:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:32.497 00:40:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:32.497 00:40:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:32.497 00:40:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:32.755 00:40:18 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:24:32.755 00:40:18 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:24:32.755 00:40:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:32.755 00:40:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:32.755 00:40:18 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:32.755 00:40:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:32.755 00:40:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:33.014 00:40:18 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:24:33.014 00:40:18 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:33.014 00:40:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:33.272 [2024-12-17 00:40:19.070454] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:33.272 nvme0n1 00:24:33.272 00:40:19 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:24:33.272 00:40:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:33.272 00:40:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:33.272 00:40:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:33.272 00:40:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:33.272 00:40:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:33.531 00:40:19 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:24:33.531 00:40:19 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:24:33.531 00:40:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:33.531 00:40:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:33.531 00:40:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:33.531 00:40:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:33.531 00:40:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:33.789 00:40:19 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:24:33.789 00:40:19 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:33.789 Running I/O for 1 seconds... 
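Condensed from the trace above, the positive-path check is: attach a controller that references key0 by name, confirm the keyring now holds a second reference on it (file registration plus attached controller) while key1 stays at one, then launch the short bdevperf run whose results follow.

rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
$rpc keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'   # expect 2
$rpc keyring_get_keys | jq -r '.[] | select(.name == "key1") | .refcnt'   # expect 1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests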
00:24:34.997 13822.00 IOPS, 53.99 MiB/s 00:24:34.997 Latency(us) 00:24:34.997 [2024-12-17T00:40:21.000Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.997 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:24:34.997 nvme0n1 : 1.01 13853.88 54.12 0.00 0.00 9215.16 3455.53 14120.03 00:24:34.997 [2024-12-17T00:40:21.000Z] =================================================================================================================== 00:24:34.997 [2024-12-17T00:40:21.000Z] Total : 13853.88 54.12 0.00 0.00 9215.16 3455.53 14120.03 00:24:34.997 { 00:24:34.997 "results": [ 00:24:34.997 { 00:24:34.997 "job": "nvme0n1", 00:24:34.997 "core_mask": "0x2", 00:24:34.997 "workload": "randrw", 00:24:34.997 "percentage": 50, 00:24:34.997 "status": "finished", 00:24:34.998 "queue_depth": 128, 00:24:34.998 "io_size": 4096, 00:24:34.998 "runtime": 1.006938, 00:24:34.998 "iops": 13853.881768291592, 00:24:34.998 "mibps": 54.11672565738903, 00:24:34.998 "io_failed": 0, 00:24:34.998 "io_timeout": 0, 00:24:34.998 "avg_latency_us": 9215.1554419029, 00:24:34.998 "min_latency_us": 3455.5345454545454, 00:24:34.998 "max_latency_us": 14120.02909090909 00:24:34.998 } 00:24:34.998 ], 00:24:34.998 "core_count": 1 00:24:34.998 } 00:24:34.998 00:40:20 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:34.998 00:40:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:35.323 00:40:21 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:24:35.323 00:40:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:35.323 00:40:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:35.323 00:40:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:35.323 00:40:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:35.323 00:40:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:35.323 00:40:21 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:24:35.323 00:40:21 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:24:35.323 00:40:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:35.323 00:40:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:35.323 00:40:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:35.323 00:40:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:35.323 00:40:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:35.608 00:40:21 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:24:35.608 00:40:21 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:35.608 00:40:21 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:24:35.608 00:40:21 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:35.608 00:40:21 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:24:35.608 00:40:21 keyring_file -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:35.608 00:40:21 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:24:35.608 00:40:21 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:35.608 00:40:21 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:35.608 00:40:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:35.865 [2024-12-17 00:40:21.810705] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spd[2024-12-17 00:40:21.810729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe4e320 (107): Transport endpoint is not connected 00:24:35.865 k_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:35.865 [2024-12-17 00:40:21.811721] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe4e320 (9): Bad file descriptor 00:24:35.865 [2024-12-17 00:40:21.812719] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:35.865 [2024-12-17 00:40:21.812901] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:24:35.865 [2024-12-17 00:40:21.813041] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:24:35.865 [2024-12-17 00:40:21.813170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:24:35.865 request: 00:24:35.865 { 00:24:35.865 "name": "nvme0", 00:24:35.865 "trtype": "tcp", 00:24:35.865 "traddr": "127.0.0.1", 00:24:35.865 "adrfam": "ipv4", 00:24:35.865 "trsvcid": "4420", 00:24:35.865 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:35.865 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:35.865 "prchk_reftag": false, 00:24:35.865 "prchk_guard": false, 00:24:35.865 "hdgst": false, 00:24:35.865 "ddgst": false, 00:24:35.865 "psk": "key1", 00:24:35.865 "allow_unrecognized_csi": false, 00:24:35.865 "method": "bdev_nvme_attach_controller", 00:24:35.865 "req_id": 1 00:24:35.865 } 00:24:35.865 Got JSON-RPC error response 00:24:35.865 response: 00:24:35.865 { 00:24:35.865 "code": -5, 00:24:35.865 "message": "Input/output error" 00:24:35.865 } 00:24:35.866 00:40:21 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:24:35.866 00:40:21 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:35.866 00:40:21 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:35.866 00:40:21 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:35.866 00:40:21 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:24:35.866 00:40:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:35.866 00:40:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:35.866 00:40:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:35.866 00:40:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:35.866 00:40:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:36.124 00:40:22 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:24:36.124 00:40:22 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:24:36.124 00:40:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:36.124 00:40:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:36.124 00:40:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:36.124 00:40:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:36.124 00:40:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:36.381 00:40:22 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:24:36.381 00:40:22 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:24:36.381 00:40:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:36.639 00:40:22 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:24:36.639 00:40:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:24:36.897 00:40:22 keyring_file -- keyring/file.sh@78 -- # jq length 00:24:36.897 00:40:22 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:24:36.897 00:40:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:37.156 00:40:23 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:24:37.156 00:40:23 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.pxLNHc1F08 00:24:37.156 00:40:23 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.pxLNHc1F08 00:24:37.156 00:40:23 keyring_file -- 
common/autotest_common.sh@650 -- # local es=0 00:24:37.156 00:40:23 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.pxLNHc1F08 00:24:37.156 00:40:23 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:24:37.156 00:40:23 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:37.156 00:40:23 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:24:37.156 00:40:23 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:37.156 00:40:23 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pxLNHc1F08 00:24:37.156 00:40:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pxLNHc1F08 00:24:37.414 [2024-12-17 00:40:23.381618] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.pxLNHc1F08': 0100660 00:24:37.414 [2024-12-17 00:40:23.381653] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:37.414 request: 00:24:37.414 { 00:24:37.414 "name": "key0", 00:24:37.414 "path": "/tmp/tmp.pxLNHc1F08", 00:24:37.414 "method": "keyring_file_add_key", 00:24:37.414 "req_id": 1 00:24:37.414 } 00:24:37.414 Got JSON-RPC error response 00:24:37.414 response: 00:24:37.414 { 00:24:37.414 "code": -1, 00:24:37.414 "message": "Operation not permitted" 00:24:37.414 } 00:24:37.414 00:40:23 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:24:37.414 00:40:23 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:37.414 00:40:23 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:37.414 00:40:23 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:37.414 00:40:23 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.pxLNHc1F08 00:24:37.414 00:40:23 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pxLNHc1F08 00:24:37.414 00:40:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pxLNHc1F08 00:24:37.980 00:40:23 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.pxLNHc1F08 00:24:37.980 00:40:23 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:24:37.980 00:40:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:37.980 00:40:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:37.980 00:40:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:37.980 00:40:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:37.980 00:40:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:37.980 00:40:23 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:24:37.980 00:40:23 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:37.980 00:40:23 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:24:37.980 00:40:23 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:37.980 00:40:23 
keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:24:37.980 00:40:23 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:37.980 00:40:23 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:24:37.980 00:40:23 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:37.980 00:40:23 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:37.980 00:40:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:38.238 [2024-12-17 00:40:24.137725] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.pxLNHc1F08': No such file or directory 00:24:38.238 [2024-12-17 00:40:24.137774] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:24:38.238 [2024-12-17 00:40:24.137815] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:24:38.238 [2024-12-17 00:40:24.137827] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:24:38.238 [2024-12-17 00:40:24.137839] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:38.238 [2024-12-17 00:40:24.137851] bdev_nvme.c:6447:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:24:38.238 request: 00:24:38.238 { 00:24:38.238 "name": "nvme0", 00:24:38.238 "trtype": "tcp", 00:24:38.238 "traddr": "127.0.0.1", 00:24:38.238 "adrfam": "ipv4", 00:24:38.238 "trsvcid": "4420", 00:24:38.238 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:38.238 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:38.238 "prchk_reftag": false, 00:24:38.238 "prchk_guard": false, 00:24:38.238 "hdgst": false, 00:24:38.238 "ddgst": false, 00:24:38.238 "psk": "key0", 00:24:38.238 "allow_unrecognized_csi": false, 00:24:38.238 "method": "bdev_nvme_attach_controller", 00:24:38.238 "req_id": 1 00:24:38.238 } 00:24:38.238 Got JSON-RPC error response 00:24:38.238 response: 00:24:38.238 { 00:24:38.238 "code": -19, 00:24:38.238 "message": "No such device" 00:24:38.238 } 00:24:38.238 00:40:24 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:24:38.238 00:40:24 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:38.238 00:40:24 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:38.238 00:40:24 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:38.238 00:40:24 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:24:38.238 00:40:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:38.496 00:40:24 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:38.496 00:40:24 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:38.496 00:40:24 keyring_file -- keyring/common.sh@17 -- # name=key0 00:24:38.496 00:40:24 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:38.496 
00:40:24 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:38.496 00:40:24 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:38.496 00:40:24 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.VCq1Hrmbkr 00:24:38.496 00:40:24 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:38.496 00:40:24 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:38.496 00:40:24 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:24:38.496 00:40:24 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:24:38.496 00:40:24 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:24:38.496 00:40:24 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:24:38.496 00:40:24 keyring_file -- nvmf/common.sh@729 -- # python - 00:24:38.754 00:40:24 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.VCq1Hrmbkr 00:24:38.754 00:40:24 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.VCq1Hrmbkr 00:24:38.754 00:40:24 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.VCq1Hrmbkr 00:24:38.754 00:40:24 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VCq1Hrmbkr 00:24:38.754 00:40:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VCq1Hrmbkr 00:24:39.012 00:40:24 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:39.012 00:40:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:39.269 nvme0n1 00:24:39.269 00:40:25 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:24:39.269 00:40:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:39.269 00:40:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:39.269 00:40:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:39.269 00:40:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:39.269 00:40:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:39.526 00:40:25 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:24:39.526 00:40:25 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:24:39.526 00:40:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:39.783 00:40:25 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:24:39.783 00:40:25 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:24:39.783 00:40:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:39.783 00:40:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:39.783 00:40:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:40.042 00:40:25 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:24:40.042 00:40:25 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:24:40.042 00:40:25 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:24:40.042 00:40:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:40.042 00:40:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:40.042 00:40:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:40.042 00:40:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:40.042 00:40:26 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:24:40.042 00:40:26 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:40.042 00:40:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:40.300 00:40:26 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:24:40.300 00:40:26 keyring_file -- keyring/file.sh@105 -- # jq length 00:24:40.300 00:40:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:40.558 00:40:26 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:24:40.558 00:40:26 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VCq1Hrmbkr 00:24:40.558 00:40:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VCq1Hrmbkr 00:24:40.816 00:40:26 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.zmX8dMueYy 00:24:40.816 00:40:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.zmX8dMueYy 00:24:41.074 00:40:27 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:41.074 00:40:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:41.332 nvme0n1 00:24:41.332 00:40:27 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:24:41.332 00:40:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:24:41.900 00:40:27 keyring_file -- keyring/file.sh@113 -- # config='{ 00:24:41.900 "subsystems": [ 00:24:41.900 { 00:24:41.900 "subsystem": "keyring", 00:24:41.900 "config": [ 00:24:41.900 { 00:24:41.900 "method": "keyring_file_add_key", 00:24:41.900 "params": { 00:24:41.900 "name": "key0", 00:24:41.900 "path": "/tmp/tmp.VCq1Hrmbkr" 00:24:41.900 } 00:24:41.900 }, 00:24:41.900 { 00:24:41.900 "method": "keyring_file_add_key", 00:24:41.900 "params": { 00:24:41.900 "name": "key1", 00:24:41.900 "path": "/tmp/tmp.zmX8dMueYy" 00:24:41.900 } 00:24:41.900 } 00:24:41.900 ] 00:24:41.900 }, 00:24:41.900 { 00:24:41.900 "subsystem": "iobuf", 00:24:41.900 "config": [ 00:24:41.900 { 00:24:41.900 "method": "iobuf_set_options", 00:24:41.900 "params": { 00:24:41.900 "small_pool_count": 8192, 00:24:41.900 "large_pool_count": 1024, 00:24:41.900 "small_bufsize": 8192, 00:24:41.900 "large_bufsize": 135168 00:24:41.900 } 00:24:41.900 } 00:24:41.900 ] 00:24:41.900 }, 00:24:41.900 { 00:24:41.900 "subsystem": "sock", 00:24:41.900 "config": [ 
00:24:41.900 { 00:24:41.900 "method": "sock_set_default_impl", 00:24:41.900 "params": { 00:24:41.900 "impl_name": "uring" 00:24:41.900 } 00:24:41.900 }, 00:24:41.900 { 00:24:41.900 "method": "sock_impl_set_options", 00:24:41.900 "params": { 00:24:41.900 "impl_name": "ssl", 00:24:41.900 "recv_buf_size": 4096, 00:24:41.900 "send_buf_size": 4096, 00:24:41.900 "enable_recv_pipe": true, 00:24:41.900 "enable_quickack": false, 00:24:41.900 "enable_placement_id": 0, 00:24:41.900 "enable_zerocopy_send_server": true, 00:24:41.900 "enable_zerocopy_send_client": false, 00:24:41.900 "zerocopy_threshold": 0, 00:24:41.900 "tls_version": 0, 00:24:41.900 "enable_ktls": false 00:24:41.900 } 00:24:41.900 }, 00:24:41.900 { 00:24:41.900 "method": "sock_impl_set_options", 00:24:41.900 "params": { 00:24:41.900 "impl_name": "posix", 00:24:41.900 "recv_buf_size": 2097152, 00:24:41.900 "send_buf_size": 2097152, 00:24:41.900 "enable_recv_pipe": true, 00:24:41.900 "enable_quickack": false, 00:24:41.900 "enable_placement_id": 0, 00:24:41.900 "enable_zerocopy_send_server": true, 00:24:41.900 "enable_zerocopy_send_client": false, 00:24:41.900 "zerocopy_threshold": 0, 00:24:41.900 "tls_version": 0, 00:24:41.900 "enable_ktls": false 00:24:41.900 } 00:24:41.900 }, 00:24:41.900 { 00:24:41.900 "method": "sock_impl_set_options", 00:24:41.900 "params": { 00:24:41.900 "impl_name": "uring", 00:24:41.900 "recv_buf_size": 2097152, 00:24:41.900 "send_buf_size": 2097152, 00:24:41.900 "enable_recv_pipe": true, 00:24:41.900 "enable_quickack": false, 00:24:41.900 "enable_placement_id": 0, 00:24:41.900 "enable_zerocopy_send_server": false, 00:24:41.900 "enable_zerocopy_send_client": false, 00:24:41.900 "zerocopy_threshold": 0, 00:24:41.900 "tls_version": 0, 00:24:41.900 "enable_ktls": false 00:24:41.900 } 00:24:41.900 } 00:24:41.900 ] 00:24:41.900 }, 00:24:41.900 { 00:24:41.900 "subsystem": "vmd", 00:24:41.900 "config": [] 00:24:41.900 }, 00:24:41.900 { 00:24:41.900 "subsystem": "accel", 00:24:41.900 "config": [ 00:24:41.900 { 00:24:41.900 "method": "accel_set_options", 00:24:41.900 "params": { 00:24:41.900 "small_cache_size": 128, 00:24:41.900 "large_cache_size": 16, 00:24:41.900 "task_count": 2048, 00:24:41.900 "sequence_count": 2048, 00:24:41.900 "buf_count": 2048 00:24:41.900 } 00:24:41.900 } 00:24:41.900 ] 00:24:41.900 }, 00:24:41.900 { 00:24:41.900 "subsystem": "bdev", 00:24:41.900 "config": [ 00:24:41.900 { 00:24:41.900 "method": "bdev_set_options", 00:24:41.900 "params": { 00:24:41.900 "bdev_io_pool_size": 65535, 00:24:41.900 "bdev_io_cache_size": 256, 00:24:41.900 "bdev_auto_examine": true, 00:24:41.900 "iobuf_small_cache_size": 128, 00:24:41.900 "iobuf_large_cache_size": 16 00:24:41.900 } 00:24:41.900 }, 00:24:41.900 { 00:24:41.900 "method": "bdev_raid_set_options", 00:24:41.900 "params": { 00:24:41.900 "process_window_size_kb": 1024, 00:24:41.900 "process_max_bandwidth_mb_sec": 0 00:24:41.900 } 00:24:41.900 }, 00:24:41.900 { 00:24:41.900 "method": "bdev_iscsi_set_options", 00:24:41.900 "params": { 00:24:41.900 "timeout_sec": 30 00:24:41.900 } 00:24:41.900 }, 00:24:41.900 { 00:24:41.900 "method": "bdev_nvme_set_options", 00:24:41.900 "params": { 00:24:41.900 "action_on_timeout": "none", 00:24:41.900 "timeout_us": 0, 00:24:41.900 "timeout_admin_us": 0, 00:24:41.900 "keep_alive_timeout_ms": 10000, 00:24:41.900 "arbitration_burst": 0, 00:24:41.900 "low_priority_weight": 0, 00:24:41.900 "medium_priority_weight": 0, 00:24:41.900 "high_priority_weight": 0, 00:24:41.900 "nvme_adminq_poll_period_us": 10000, 00:24:41.900 
"nvme_ioq_poll_period_us": 0, 00:24:41.900 "io_queue_requests": 512, 00:24:41.900 "delay_cmd_submit": true, 00:24:41.900 "transport_retry_count": 4, 00:24:41.900 "bdev_retry_count": 3, 00:24:41.900 "transport_ack_timeout": 0, 00:24:41.900 "ctrlr_loss_timeout_sec": 0, 00:24:41.900 "reconnect_delay_sec": 0, 00:24:41.900 "fast_io_fail_timeout_sec": 0, 00:24:41.900 "disable_auto_failback": false, 00:24:41.900 "generate_uuids": false, 00:24:41.900 "transport_tos": 0, 00:24:41.900 "nvme_error_stat": false, 00:24:41.900 "rdma_srq_size": 0, 00:24:41.900 "io_path_stat": false, 00:24:41.900 "allow_accel_sequence": false, 00:24:41.900 "rdma_max_cq_size": 0, 00:24:41.900 "rdma_cm_event_timeout_ms": 0, 00:24:41.900 "dhchap_digests": [ 00:24:41.900 "sha256", 00:24:41.900 "sha384", 00:24:41.900 "sha512" 00:24:41.901 ], 00:24:41.901 "dhchap_dhgroups": [ 00:24:41.901 "null", 00:24:41.901 "ffdhe2048", 00:24:41.901 "ffdhe3072", 00:24:41.901 "ffdhe4096", 00:24:41.901 "ffdhe6144", 00:24:41.901 "ffdhe8192" 00:24:41.901 ] 00:24:41.901 } 00:24:41.901 }, 00:24:41.901 { 00:24:41.901 "method": "bdev_nvme_attach_controller", 00:24:41.901 "params": { 00:24:41.901 "name": "nvme0", 00:24:41.901 "trtype": "TCP", 00:24:41.901 "adrfam": "IPv4", 00:24:41.901 "traddr": "127.0.0.1", 00:24:41.901 "trsvcid": "4420", 00:24:41.901 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:41.901 "prchk_reftag": false, 00:24:41.901 "prchk_guard": false, 00:24:41.901 "ctrlr_loss_timeout_sec": 0, 00:24:41.901 "reconnect_delay_sec": 0, 00:24:41.901 "fast_io_fail_timeout_sec": 0, 00:24:41.901 "psk": "key0", 00:24:41.901 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:41.901 "hdgst": false, 00:24:41.901 "ddgst": false 00:24:41.901 } 00:24:41.901 }, 00:24:41.901 { 00:24:41.901 "method": "bdev_nvme_set_hotplug", 00:24:41.901 "params": { 00:24:41.901 "period_us": 100000, 00:24:41.901 "enable": false 00:24:41.901 } 00:24:41.901 }, 00:24:41.901 { 00:24:41.901 "method": "bdev_wait_for_examine" 00:24:41.901 } 00:24:41.901 ] 00:24:41.901 }, 00:24:41.901 { 00:24:41.901 "subsystem": "nbd", 00:24:41.901 "config": [] 00:24:41.901 } 00:24:41.901 ] 00:24:41.901 }' 00:24:41.901 00:40:27 keyring_file -- keyring/file.sh@115 -- # killprocess 99450 00:24:41.901 00:40:27 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 99450 ']' 00:24:41.901 00:40:27 keyring_file -- common/autotest_common.sh@954 -- # kill -0 99450 00:24:41.901 00:40:27 keyring_file -- common/autotest_common.sh@955 -- # uname 00:24:41.901 00:40:27 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:41.901 00:40:27 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99450 00:24:41.901 killing process with pid 99450 00:24:41.901 Received shutdown signal, test time was about 1.000000 seconds 00:24:41.901 00:24:41.901 Latency(us) 00:24:41.901 [2024-12-17T00:40:27.904Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:41.901 [2024-12-17T00:40:27.904Z] =================================================================================================================== 00:24:41.901 [2024-12-17T00:40:27.904Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:41.901 00:40:27 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:41.901 00:40:27 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:41.901 00:40:27 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99450' 00:24:41.901 00:40:27 keyring_file -- common/autotest_common.sh@969 -- # kill 
99450 00:24:41.901 00:40:27 keyring_file -- common/autotest_common.sh@974 -- # wait 99450 00:24:41.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:41.901 00:40:27 keyring_file -- keyring/file.sh@118 -- # bperfpid=99696 00:24:41.901 00:40:27 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:24:41.901 00:40:27 keyring_file -- keyring/file.sh@120 -- # waitforlisten 99696 /var/tmp/bperf.sock 00:24:41.901 00:40:27 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 99696 ']' 00:24:41.901 00:40:27 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:41.901 00:40:27 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:41.901 00:40:27 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:24:41.901 "subsystems": [ 00:24:41.901 { 00:24:41.901 "subsystem": "keyring", 00:24:41.901 "config": [ 00:24:41.901 { 00:24:41.901 "method": "keyring_file_add_key", 00:24:41.901 "params": { 00:24:41.901 "name": "key0", 00:24:41.901 "path": "/tmp/tmp.VCq1Hrmbkr" 00:24:41.901 } 00:24:41.901 }, 00:24:41.901 { 00:24:41.901 "method": "keyring_file_add_key", 00:24:41.901 "params": { 00:24:41.901 "name": "key1", 00:24:41.901 "path": "/tmp/tmp.zmX8dMueYy" 00:24:41.901 } 00:24:41.901 } 00:24:41.901 ] 00:24:41.901 }, 00:24:41.901 { 00:24:41.901 "subsystem": "iobuf", 00:24:41.901 "config": [ 00:24:41.901 { 00:24:41.901 "method": "iobuf_set_options", 00:24:41.901 "params": { 00:24:41.901 "small_pool_count": 8192, 00:24:41.901 "large_pool_count": 1024, 00:24:41.901 "small_bufsize": 8192, 00:24:41.901 "large_bufsize": 135168 00:24:41.901 } 00:24:41.901 } 00:24:41.901 ] 00:24:41.901 }, 00:24:41.901 { 00:24:41.901 "subsystem": "sock", 00:24:41.901 "config": [ 00:24:41.901 { 00:24:41.901 "method": "sock_set_default_impl", 00:24:41.901 "params": { 00:24:41.901 "impl_name": "uring" 00:24:41.901 } 00:24:41.901 }, 00:24:41.901 { 00:24:41.901 "method": "sock_impl_set_options", 00:24:41.901 "params": { 00:24:41.901 "impl_name": "ssl", 00:24:41.901 "recv_buf_size": 4096, 00:24:41.901 "send_buf_size": 4096, 00:24:41.901 "enable_recv_pipe": true, 00:24:41.901 "enable_quickack": false, 00:24:41.901 "enable_placement_id": 0, 00:24:41.901 "enable_zerocopy_send_server": true, 00:24:41.901 "enable_zerocopy_send_client": false, 00:24:41.901 "zerocopy_threshold": 0, 00:24:41.901 "tls_version": 0, 00:24:41.901 "enable_ktls": false 00:24:41.901 } 00:24:41.901 }, 00:24:41.901 { 00:24:41.901 "method": "sock_impl_set_options", 00:24:41.901 "params": { 00:24:41.901 "impl_name": "posix", 00:24:41.901 "recv_buf_size": 2097152, 00:24:41.901 "send_buf_size": 2097152, 00:24:41.901 "enable_recv_pipe": true, 00:24:41.901 "enable_quickack": false, 00:24:41.901 "enable_placement_id": 0, 00:24:41.901 "enable_zerocopy_send_server": true, 00:24:41.901 "enable_zerocopy_send_client": false, 00:24:41.901 "zerocopy_threshold": 0, 00:24:41.901 "tls_version": 0, 00:24:41.901 "enable_ktls": false 00:24:41.901 } 00:24:41.901 }, 00:24:41.901 { 00:24:41.901 "method": "sock_impl_set_options", 00:24:41.901 "params": { 00:24:41.901 "impl_name": "uring", 00:24:41.901 "recv_buf_size": 2097152, 00:24:41.901 "send_buf_size": 2097152, 00:24:41.901 "enable_recv_pipe": true, 00:24:41.901 "enable_quickack": false, 00:24:41.901 "enable_placement_id": 0, 00:24:41.901 "enable_zerocopy_send_server": false, 00:24:41.901 
"enable_zerocopy_send_client": false, 00:24:41.901 "zerocopy_threshold": 0, 00:24:41.901 "tls_version": 0, 00:24:41.901 "enable_ktls": false 00:24:41.901 } 00:24:41.901 } 00:24:41.901 ] 00:24:41.901 }, 00:24:41.901 { 00:24:41.901 "subsystem": "vmd", 00:24:41.901 "config": [] 00:24:41.901 }, 00:24:41.901 { 00:24:41.901 "subsystem": "accel", 00:24:41.901 "config": [ 00:24:41.901 { 00:24:41.901 "method": "accel_set_options", 00:24:41.901 "params": { 00:24:41.901 "small_cache_size": 128, 00:24:41.901 "large_cache_size": 16, 00:24:41.901 "task_count": 2048, 00:24:41.901 "sequence_count": 2048, 00:24:41.901 "buf_count": 2048 00:24:41.901 } 00:24:41.901 } 00:24:41.901 ] 00:24:41.901 }, 00:24:41.901 { 00:24:41.901 "subsystem": "bdev", 00:24:41.901 "config": [ 00:24:41.901 { 00:24:41.901 "method": "bdev_set_options", 00:24:41.901 "params": { 00:24:41.901 "bdev_io_pool_size": 65535, 00:24:41.901 "bdev_io_cache_size": 256, 00:24:41.901 "bdev_auto_examine": true, 00:24:41.901 "iobuf_small_cache_size": 128, 00:24:41.901 "iobuf_large_cache_size": 16 00:24:41.901 } 00:24:41.901 }, 00:24:41.901 { 00:24:41.901 "method": "bdev_raid_set_options", 00:24:41.901 "params": { 00:24:41.901 "process_window_size_kb": 1024, 00:24:41.901 "process_max_bandwidth_mb_sec": 0 00:24:41.901 } 00:24:41.901 }, 00:24:41.901 { 00:24:41.901 "method": "bdev_iscsi_set_options", 00:24:41.901 "params": { 00:24:41.901 "timeout_sec": 30 00:24:41.901 } 00:24:41.901 }, 00:24:41.901 { 00:24:41.901 "method": "bdev_nvme_set_options", 00:24:41.901 "params": { 00:24:41.901 "action_on_timeout": "none", 00:24:41.901 "timeout_us": 0, 00:24:41.901 "timeout_admin_us": 0, 00:24:41.901 "keep_alive_timeout_ms": 10000, 00:24:41.901 "arbitration_burst": 0, 00:24:41.901 "low_priority_weight": 0, 00:24:41.901 "medium_priority_weight": 0, 00:24:41.901 "high_priority_weight": 0, 00:24:41.901 "nvme_adminq_poll_period_us": 10000, 00:24:41.901 "nvme_ioq_poll_period_us": 0, 00:24:41.901 "io_queue_requests": 512, 00:24:41.901 "delay_cmd_submit": true, 00:24:41.901 "transport_retry_count": 4, 00:24:41.901 "bdev_retry_count": 3, 00:24:41.901 "transport_ack_timeout": 0, 00:24:41.901 "ctrlr_loss_timeout_sec": 0, 00:24:41.901 "reconnect_delay_sec": 0, 00:24:41.902 "fast_io_fail_timeout_sec": 0, 00:24:41.902 "disable_auto_failback": false, 00:24:41.902 "generate_uuids": false, 00:24:41.902 "transport_tos": 0, 00:24:41.902 "nvme_error_stat": false, 00:24:41.902 "rdma_srq_size": 0, 00:24:41.902 "io_path_stat": false, 00:24:41.902 "allow_accel_sequence": false, 00:24:41.902 "rdma_max_cq_size": 0, 00:24:41.902 "rdma_cm_event_timeout_ms": 0, 00:24:41.902 "dhchap_digests": [ 00:24:41.902 "sha256", 00:24:41.902 "sha384", 00:24:41.902 "sha512" 00:24:41.902 ], 00:24:41.902 "dhchap_dhgroups": [ 00:24:41.902 "null", 00:24:41.902 "ffdhe2048", 00:24:41.902 "ffdhe3072", 00:24:41.902 "ffdhe4096", 00:24:41.902 "ffdhe6144", 00:24:41.902 "ffdhe8192" 00:24:41.902 ] 00:24:41.902 } 00:24:41.902 }, 00:24:41.902 { 00:24:41.902 "method": "bdev_nvme_attach_controller", 00:24:41.902 "params": { 00:24:41.902 "name": "nvme0", 00:24:41.902 "trtype": "TCP", 00:24:41.902 "adrfam": "IPv4", 00:24:41.902 "traddr": "127.0.0.1", 00:24:41.902 "trsvcid": "4420", 00:24:41.902 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:41.902 "prchk_reftag": false, 00:24:41.902 "prchk_guard": false, 00:24:41.902 "ctrlr_loss_timeout_sec": 0, 00:24:41.902 "reconnect_delay_sec": 0, 00:24:41.902 "fast_io_fail_timeout_sec": 0, 00:24:41.902 "psk": "key0", 00:24:41.902 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:41.902 
"hdgst": false, 00:24:41.902 "ddgst": false 00:24:41.902 } 00:24:41.902 }, 00:24:41.902 { 00:24:41.902 "method": "bdev_nvme_set_hotplug", 00:24:41.902 "params": { 00:24:41.902 "period_us": 100000, 00:24:41.902 "enable": false 00:24:41.902 } 00:24:41.902 }, 00:24:41.902 { 00:24:41.902 "method": "bdev_wait_for_examine" 00:24:41.902 } 00:24:41.902 ] 00:24:41.902 }, 00:24:41.902 { 00:24:41.902 "subsystem": "nbd", 00:24:41.902 "config": [] 00:24:41.902 } 00:24:41.902 ] 00:24:41.902 }' 00:24:41.902 00:40:27 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:41.902 00:40:27 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:41.902 00:40:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:41.902 [2024-12-17 00:40:27.853675] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:24:41.902 [2024-12-17 00:40:27.853957] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99696 ] 00:24:42.160 [2024-12-17 00:40:27.983540] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.160 [2024-12-17 00:40:28.016147] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:42.160 [2024-12-17 00:40:28.125199] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:42.160 [2024-12-17 00:40:28.161113] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:43.095 00:40:28 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:43.095 00:40:28 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:24:43.095 00:40:28 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:24:43.095 00:40:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:43.095 00:40:28 keyring_file -- keyring/file.sh@121 -- # jq length 00:24:43.354 00:40:29 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:24:43.354 00:40:29 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:24:43.354 00:40:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:43.354 00:40:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:43.354 00:40:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:43.354 00:40:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:43.354 00:40:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:43.612 00:40:29 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:24:43.612 00:40:29 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:24:43.612 00:40:29 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:43.612 00:40:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:43.612 00:40:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:43.612 00:40:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:43.612 00:40:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:43.612 00:40:29 keyring_file -- 
keyring/file.sh@123 -- # (( 1 == 1 )) 00:24:43.612 00:40:29 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:24:43.612 00:40:29 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:24:43.612 00:40:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:24:43.870 00:40:29 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:24:43.870 00:40:29 keyring_file -- keyring/file.sh@1 -- # cleanup 00:24:43.870 00:40:29 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.VCq1Hrmbkr /tmp/tmp.zmX8dMueYy 00:24:43.870 00:40:29 keyring_file -- keyring/file.sh@20 -- # killprocess 99696 00:24:43.870 00:40:29 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 99696 ']' 00:24:43.870 00:40:29 keyring_file -- common/autotest_common.sh@954 -- # kill -0 99696 00:24:43.870 00:40:29 keyring_file -- common/autotest_common.sh@955 -- # uname 00:24:43.870 00:40:29 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:43.870 00:40:29 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99696 00:24:44.128 killing process with pid 99696 00:24:44.128 Received shutdown signal, test time was about 1.000000 seconds 00:24:44.128 00:24:44.128 Latency(us) 00:24:44.128 [2024-12-17T00:40:30.131Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:44.128 [2024-12-17T00:40:30.131Z] =================================================================================================================== 00:24:44.128 [2024-12-17T00:40:30.131Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:44.128 00:40:29 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:44.128 00:40:29 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:44.128 00:40:29 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99696' 00:24:44.128 00:40:29 keyring_file -- common/autotest_common.sh@969 -- # kill 99696 00:24:44.128 00:40:29 keyring_file -- common/autotest_common.sh@974 -- # wait 99696 00:24:44.128 00:40:30 keyring_file -- keyring/file.sh@21 -- # killprocess 99445 00:24:44.129 00:40:30 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 99445 ']' 00:24:44.129 00:40:30 keyring_file -- common/autotest_common.sh@954 -- # kill -0 99445 00:24:44.129 00:40:30 keyring_file -- common/autotest_common.sh@955 -- # uname 00:24:44.129 00:40:30 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:44.129 00:40:30 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99445 00:24:44.129 killing process with pid 99445 00:24:44.129 00:40:30 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:44.129 00:40:30 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:44.129 00:40:30 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99445' 00:24:44.129 00:40:30 keyring_file -- common/autotest_common.sh@969 -- # kill 99445 00:24:44.129 00:40:30 keyring_file -- common/autotest_common.sh@974 -- # wait 99445 00:24:44.387 ************************************ 00:24:44.387 END TEST keyring_file 00:24:44.387 ************************************ 00:24:44.387 00:24:44.387 real 0m14.139s 00:24:44.387 user 0m36.608s 00:24:44.387 sys 0m2.594s 00:24:44.387 00:40:30 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:44.387 
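For reference, the keyring_file pass that just finished boils down to the RPC sequence below against the bdevperf socket. This is a condensed sketch using the socket path and sample key from this run; the test itself generates the interchange-format PSK via format_interchange_psk and a mktemp path rather than hard-coding them, and drives rpc.py through its bperf_cmd helper.
# Stage a TLS PSK file and register it with the keyring_file backend as "key0"
KEY=$(mktemp)                     # /tmp/tmp.VCq1Hrmbkr in this run
echo 'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY"
chmod 0600 "$KEY"
scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$KEY"
# Attach the NVMe/TCP controller using the registered key as the PSK
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
    -q nqn.2016-06.io.spdk:host0 --psk key0
# While the controller holds the key, keyring_get_keys reports refcnt 2; removing
# the key marks it "removed" but it stays referenced until the controller detaches
scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0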
00:40:30 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:44.387 00:40:30 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:24:44.387 00:40:30 -- spdk/autotest.sh@290 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:24:44.387 00:40:30 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:44.387 00:40:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:44.387 00:40:30 -- common/autotest_common.sh@10 -- # set +x 00:24:44.387 ************************************ 00:24:44.387 START TEST keyring_linux 00:24:44.387 ************************************ 00:24:44.387 00:40:30 keyring_linux -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:24:44.387 Joined session keyring: 676727422 00:24:44.646 * Looking for test storage... 00:24:44.646 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:24:44.646 00:40:30 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:44.646 00:40:30 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:24:44.646 00:40:30 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:44.646 00:40:30 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:44.646 00:40:30 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:44.646 00:40:30 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:44.646 00:40:30 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:44.646 00:40:30 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:24:44.646 00:40:30 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:24:44.646 00:40:30 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:24:44.646 00:40:30 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:24:44.646 00:40:30 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:24:44.646 00:40:30 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:24:44.646 00:40:30 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:24:44.646 00:40:30 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:44.646 00:40:30 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:24:44.646 00:40:30 keyring_linux -- scripts/common.sh@345 -- # : 1 00:24:44.646 00:40:30 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:44.646 00:40:30 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:44.646 00:40:30 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:24:44.646 00:40:30 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:24:44.646 00:40:30 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:44.646 00:40:30 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:24:44.646 00:40:30 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:24:44.646 00:40:30 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:24:44.646 00:40:30 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:24:44.646 00:40:30 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:44.646 00:40:30 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:24:44.646 00:40:30 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:24:44.646 00:40:30 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:44.646 00:40:30 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:44.646 00:40:30 keyring_linux -- scripts/common.sh@368 -- # return 0 00:24:44.646 00:40:30 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:44.646 00:40:30 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:44.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.646 --rc genhtml_branch_coverage=1 00:24:44.646 --rc genhtml_function_coverage=1 00:24:44.646 --rc genhtml_legend=1 00:24:44.646 --rc geninfo_all_blocks=1 00:24:44.646 --rc geninfo_unexecuted_blocks=1 00:24:44.646 00:24:44.646 ' 00:24:44.646 00:40:30 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:44.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.646 --rc genhtml_branch_coverage=1 00:24:44.646 --rc genhtml_function_coverage=1 00:24:44.646 --rc genhtml_legend=1 00:24:44.646 --rc geninfo_all_blocks=1 00:24:44.646 --rc geninfo_unexecuted_blocks=1 00:24:44.646 00:24:44.646 ' 00:24:44.646 00:40:30 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:44.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.646 --rc genhtml_branch_coverage=1 00:24:44.646 --rc genhtml_function_coverage=1 00:24:44.647 --rc genhtml_legend=1 00:24:44.647 --rc geninfo_all_blocks=1 00:24:44.647 --rc geninfo_unexecuted_blocks=1 00:24:44.647 00:24:44.647 ' 00:24:44.647 00:40:30 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:44.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.647 --rc genhtml_branch_coverage=1 00:24:44.647 --rc genhtml_function_coverage=1 00:24:44.647 --rc genhtml_legend=1 00:24:44.647 --rc geninfo_all_blocks=1 00:24:44.647 --rc geninfo_unexecuted_blocks=1 00:24:44.647 00:24:44.647 ' 00:24:44.647 00:40:30 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:24:44.647 00:40:30 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:44.647 00:40:30 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:24:44.647 00:40:30 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:44.647 00:40:30 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:44.647 00:40:30 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:44.647 00:40:30 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:44.647 00:40:30 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:44.647 00:40:30 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:44.647 00:40:30 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:44.647 00:40:30 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:44.647 00:40:30 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:44.647 00:40:30 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:44.647 00:40:30 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93817295-c2e4-400f-aefe-caa93fc06858 00:24:44.647 00:40:30 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=93817295-c2e4-400f-aefe-caa93fc06858 00:24:44.647 00:40:30 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:44.647 00:40:30 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:44.647 00:40:30 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:44.647 00:40:30 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:44.647 00:40:30 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:44.647 00:40:30 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:24:44.647 00:40:30 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:44.647 00:40:30 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:44.647 00:40:30 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:44.647 00:40:30 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.647 00:40:30 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.647 00:40:30 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.647 00:40:30 keyring_linux -- paths/export.sh@5 -- # export PATH 00:24:44.647 00:40:30 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.647 00:40:30 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:24:44.647 00:40:30 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:44.647 00:40:30 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:44.647 00:40:30 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:44.647 00:40:30 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:44.647 00:40:30 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:44.647 00:40:30 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:44.647 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:44.647 00:40:30 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:44.647 00:40:30 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:44.647 00:40:30 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:44.647 00:40:30 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:44.647 00:40:30 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:44.647 00:40:30 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:44.647 00:40:30 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:24:44.647 00:40:30 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:24:44.647 00:40:30 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:24:44.647 00:40:30 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:24:44.647 00:40:30 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:24:44.647 00:40:30 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:24:44.647 00:40:30 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:44.647 00:40:30 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:24:44.647 00:40:30 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:24:44.647 00:40:30 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:44.647 00:40:30 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:44.647 00:40:30 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:24:44.647 00:40:30 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:24:44.647 00:40:30 keyring_linux -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:24:44.647 00:40:30 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:24:44.647 00:40:30 keyring_linux -- nvmf/common.sh@729 -- # python - 00:24:44.647 00:40:30 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:24:44.647 /tmp/:spdk-test:key0 00:24:44.647 00:40:30 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:24:44.647 00:40:30 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:24:44.647 00:40:30 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:24:44.647 00:40:30 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:24:44.647 00:40:30 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:44.647 00:40:30 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:24:44.647 00:40:30 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:24:44.647 00:40:30 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:24:44.647 00:40:30 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:24:44.647 00:40:30 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:24:44.647 00:40:30 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:24:44.647 00:40:30 keyring_linux -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:24:44.647 00:40:30 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:24:44.647 00:40:30 keyring_linux -- nvmf/common.sh@729 -- # python - 00:24:44.647 00:40:30 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:24:44.647 /tmp/:spdk-test:key1 00:24:44.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:44.647 00:40:30 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:24:44.647 00:40:30 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=99812 00:24:44.647 00:40:30 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:44.647 00:40:30 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 99812 00:24:44.647 00:40:30 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 99812 ']' 00:24:44.647 00:40:30 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.647 00:40:30 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:44.647 00:40:30 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:44.647 00:40:30 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:44.647 00:40:30 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:44.906 [2024-12-17 00:40:30.692380] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:24:44.906 [2024-12-17 00:40:30.692679] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99812 ] 00:24:44.906 [2024-12-17 00:40:30.829159] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.906 [2024-12-17 00:40:30.861906] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.906 [2024-12-17 00:40:30.894393] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:45.165 00:40:31 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:45.165 00:40:31 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:24:45.165 00:40:31 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:24:45.165 00:40:31 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.165 00:40:31 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:45.165 [2024-12-17 00:40:31.009237] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:45.165 null0 00:24:45.165 [2024-12-17 00:40:31.041216] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:45.165 [2024-12-17 00:40:31.041517] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:45.165 00:40:31 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.165 00:40:31 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:24:45.165 1018039720 00:24:45.165 00:40:31 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:24:45.165 939143855 00:24:45.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:45.165 00:40:31 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=99823 00:24:45.165 00:40:31 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:24:45.165 00:40:31 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 99823 /var/tmp/bperf.sock 00:24:45.165 00:40:31 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 99823 ']' 00:24:45.165 00:40:31 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:45.165 00:40:31 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:45.165 00:40:31 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:45.165 00:40:31 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:45.165 00:40:31 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:45.165 [2024-12-17 00:40:31.113612] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
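The two keyctl additions above stage the same interchange-format PSKs in the kernel session keyring instead of in files, and SPDK then refers to them by name. Roughly, the sequence the bdevperf instance drives next (recorded in the trace that follows) is the sketch below; the actual test wraps these calls in its bperf_cmd helper.
# Keys live under the session keyring (@s); the name is what --psk references
keyctl add user ":spdk-test:key0" "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s
# bdevperf is started with --wait-for-rpc, so enable the linux keyring backend
# before framework init, then attach using the key name as the PSK
scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
    -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
# Verify and clean up the kernel keyring entry (serial 1018039720 in this run)
SN=$(keyctl search @s user ":spdk-test:key0")
keyctl print "$SN"
keyctl unlink "$SN"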
00:24:45.165 [2024-12-17 00:40:31.113902] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99823 ] 00:24:45.423 [2024-12-17 00:40:31.240715] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.423 [2024-12-17 00:40:31.274823] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:45.423 00:40:31 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:45.424 00:40:31 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:24:45.424 00:40:31 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:24:45.424 00:40:31 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:24:45.682 00:40:31 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:24:45.682 00:40:31 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:45.940 [2024-12-17 00:40:31.753699] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:45.940 00:40:31 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:24:45.940 00:40:31 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:24:46.199 [2024-12-17 00:40:31.988408] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:46.199 nvme0n1 00:24:46.199 00:40:32 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:24:46.199 00:40:32 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:24:46.199 00:40:32 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:24:46.199 00:40:32 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:24:46.199 00:40:32 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:24:46.199 00:40:32 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:46.457 00:40:32 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:24:46.457 00:40:32 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:24:46.457 00:40:32 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:24:46.457 00:40:32 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:24:46.457 00:40:32 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:46.458 00:40:32 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:46.458 00:40:32 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:24:46.716 00:40:32 keyring_linux -- keyring/linux.sh@25 -- # sn=1018039720 00:24:46.716 00:40:32 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:24:46.716 00:40:32 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
00:24:46.716 00:40:32 keyring_linux -- keyring/linux.sh@26 -- # [[ 1018039720 == \1\0\1\8\0\3\9\7\2\0 ]] 00:24:46.716 00:40:32 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 1018039720 00:24:46.716 00:40:32 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:24:46.716 00:40:32 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:46.716 Running I/O for 1 seconds... 00:24:48.092 14769.00 IOPS, 57.69 MiB/s 00:24:48.092 Latency(us) 00:24:48.092 [2024-12-17T00:40:34.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:48.092 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:48.092 nvme0n1 : 1.01 14788.27 57.77 0.00 0.00 8620.56 3902.37 12809.31 00:24:48.092 [2024-12-17T00:40:34.095Z] =================================================================================================================== 00:24:48.092 [2024-12-17T00:40:34.095Z] Total : 14788.27 57.77 0.00 0.00 8620.56 3902.37 12809.31 00:24:48.092 { 00:24:48.092 "results": [ 00:24:48.092 { 00:24:48.092 "job": "nvme0n1", 00:24:48.092 "core_mask": "0x2", 00:24:48.092 "workload": "randread", 00:24:48.092 "status": "finished", 00:24:48.092 "queue_depth": 128, 00:24:48.092 "io_size": 4096, 00:24:48.092 "runtime": 1.00742, 00:24:48.092 "iops": 14788.271028965079, 00:24:48.092 "mibps": 57.76668370689484, 00:24:48.092 "io_failed": 0, 00:24:48.092 "io_timeout": 0, 00:24:48.092 "avg_latency_us": 8620.557603094985, 00:24:48.092 "min_latency_us": 3902.370909090909, 00:24:48.092 "max_latency_us": 12809.309090909092 00:24:48.092 } 00:24:48.092 ], 00:24:48.092 "core_count": 1 00:24:48.092 } 00:24:48.092 00:40:33 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:48.092 00:40:33 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:48.092 00:40:34 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:24:48.092 00:40:34 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:24:48.092 00:40:34 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:24:48.092 00:40:34 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:24:48.092 00:40:34 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:24:48.092 00:40:34 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:48.351 00:40:34 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:24:48.351 00:40:34 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:24:48.351 00:40:34 keyring_linux -- keyring/linux.sh@23 -- # return 00:24:48.351 00:40:34 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:48.351 00:40:34 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:24:48.351 00:40:34 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
00:24:48.351 00:40:34 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:24:48.351 00:40:34 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:48.351 00:40:34 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:24:48.351 00:40:34 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:48.351 00:40:34 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:48.351 00:40:34 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:48.610 [2024-12-17 00:40:34.484280] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:48.610 [2024-12-17 00:40:34.485141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f7f30 (107): Transport endpoint is not connected 00:24:48.610 [2024-12-17 00:40:34.486118] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f7f30 (9): Bad file descriptor 00:24:48.610 [2024-12-17 00:40:34.487114] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:48.610 [2024-12-17 00:40:34.487138] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:24:48.610 [2024-12-17 00:40:34.487149] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:24:48.610 [2024-12-17 00:40:34.487159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:24:48.610 request: 00:24:48.610 { 00:24:48.610 "name": "nvme0", 00:24:48.610 "trtype": "tcp", 00:24:48.610 "traddr": "127.0.0.1", 00:24:48.610 "adrfam": "ipv4", 00:24:48.610 "trsvcid": "4420", 00:24:48.610 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:48.610 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:48.610 "prchk_reftag": false, 00:24:48.610 "prchk_guard": false, 00:24:48.610 "hdgst": false, 00:24:48.610 "ddgst": false, 00:24:48.610 "psk": ":spdk-test:key1", 00:24:48.610 "allow_unrecognized_csi": false, 00:24:48.610 "method": "bdev_nvme_attach_controller", 00:24:48.610 "req_id": 1 00:24:48.610 } 00:24:48.610 Got JSON-RPC error response 00:24:48.610 response: 00:24:48.610 { 00:24:48.610 "code": -5, 00:24:48.610 "message": "Input/output error" 00:24:48.610 } 00:24:48.610 00:40:34 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:24:48.610 00:40:34 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:48.610 00:40:34 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:48.610 00:40:34 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:48.610 00:40:34 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:24:48.610 00:40:34 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:24:48.610 00:40:34 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:24:48.610 00:40:34 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:24:48.610 00:40:34 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:24:48.610 00:40:34 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:24:48.610 00:40:34 keyring_linux -- keyring/linux.sh@33 -- # sn=1018039720 00:24:48.610 00:40:34 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1018039720 00:24:48.610 1 links removed 00:24:48.610 00:40:34 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:24:48.610 00:40:34 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:24:48.610 00:40:34 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:24:48.610 00:40:34 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:24:48.610 00:40:34 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:24:48.610 00:40:34 keyring_linux -- keyring/linux.sh@33 -- # sn=939143855 00:24:48.610 00:40:34 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 939143855 00:24:48.610 1 links removed 00:24:48.610 00:40:34 keyring_linux -- keyring/linux.sh@41 -- # killprocess 99823 00:24:48.610 00:40:34 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 99823 ']' 00:24:48.610 00:40:34 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 99823 00:24:48.610 00:40:34 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:24:48.610 00:40:34 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:48.610 00:40:34 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99823 00:24:48.610 killing process with pid 99823 00:24:48.610 Received shutdown signal, test time was about 1.000000 seconds 00:24:48.610 00:24:48.610 Latency(us) 00:24:48.610 [2024-12-17T00:40:34.613Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:48.610 [2024-12-17T00:40:34.613Z] =================================================================================================================== 00:24:48.610 [2024-12-17T00:40:34.613Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:48.610 00:40:34 keyring_linux -- 
common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:48.610 00:40:34 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:48.610 00:40:34 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99823' 00:24:48.610 00:40:34 keyring_linux -- common/autotest_common.sh@969 -- # kill 99823 00:24:48.610 00:40:34 keyring_linux -- common/autotest_common.sh@974 -- # wait 99823 00:24:48.869 00:40:34 keyring_linux -- keyring/linux.sh@42 -- # killprocess 99812 00:24:48.869 00:40:34 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 99812 ']' 00:24:48.869 00:40:34 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 99812 00:24:48.869 00:40:34 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:24:48.869 00:40:34 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:48.869 00:40:34 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99812 00:24:48.869 killing process with pid 99812 00:24:48.869 00:40:34 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:48.869 00:40:34 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:48.869 00:40:34 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99812' 00:24:48.869 00:40:34 keyring_linux -- common/autotest_common.sh@969 -- # kill 99812 00:24:48.869 00:40:34 keyring_linux -- common/autotest_common.sh@974 -- # wait 99812 00:24:49.127 ************************************ 00:24:49.127 END TEST keyring_linux 00:24:49.127 ************************************ 00:24:49.127 00:24:49.127 real 0m4.584s 00:24:49.127 user 0m9.250s 00:24:49.127 sys 0m1.288s 00:24:49.127 00:40:34 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:49.127 00:40:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:49.127 00:40:34 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:24:49.127 00:40:34 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:24:49.127 00:40:34 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:24:49.127 00:40:34 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:24:49.127 00:40:34 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:24:49.127 00:40:34 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:24:49.127 00:40:34 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:24:49.128 00:40:34 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:24:49.128 00:40:34 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:24:49.128 00:40:34 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:24:49.128 00:40:34 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:24:49.128 00:40:34 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:24:49.128 00:40:34 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:24:49.128 00:40:34 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:24:49.128 00:40:34 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:24:49.128 00:40:34 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:24:49.128 00:40:34 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:24:49.128 00:40:34 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:49.128 00:40:34 -- common/autotest_common.sh@10 -- # set +x 00:24:49.128 00:40:34 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:24:49.128 00:40:34 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:24:49.128 00:40:34 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:24:49.128 00:40:34 -- common/autotest_common.sh@10 -- # set +x 00:24:51.031 INFO: APP EXITING 00:24:51.031 INFO: killing all VMs 
00:24:51.031 INFO: killing vhost app 00:24:51.031 INFO: EXIT DONE 00:24:51.599 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:51.599 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:24:51.599 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:24:52.167 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:52.167 Cleaning 00:24:52.167 Removing: /var/run/dpdk/spdk0/config 00:24:52.426 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:24:52.426 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:24:52.426 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:24:52.426 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:24:52.426 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:24:52.426 Removing: /var/run/dpdk/spdk0/hugepage_info 00:24:52.426 Removing: /var/run/dpdk/spdk1/config 00:24:52.426 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:24:52.426 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:24:52.426 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:24:52.426 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:24:52.426 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:24:52.426 Removing: /var/run/dpdk/spdk1/hugepage_info 00:24:52.426 Removing: /var/run/dpdk/spdk2/config 00:24:52.426 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:24:52.426 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:24:52.426 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:24:52.426 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:24:52.426 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:24:52.426 Removing: /var/run/dpdk/spdk2/hugepage_info 00:24:52.426 Removing: /var/run/dpdk/spdk3/config 00:24:52.426 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:24:52.426 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:24:52.426 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:24:52.426 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:24:52.426 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:24:52.426 Removing: /var/run/dpdk/spdk3/hugepage_info 00:24:52.426 Removing: /var/run/dpdk/spdk4/config 00:24:52.426 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:24:52.426 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:24:52.426 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:24:52.426 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:24:52.426 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:24:52.426 Removing: /var/run/dpdk/spdk4/hugepage_info 00:24:52.426 Removing: /dev/shm/nvmf_trace.0 00:24:52.426 Removing: /dev/shm/spdk_tgt_trace.pid68941 00:24:52.426 Removing: /var/run/dpdk/spdk0 00:24:52.426 Removing: /var/run/dpdk/spdk1 00:24:52.426 Removing: /var/run/dpdk/spdk2 00:24:52.426 Removing: /var/run/dpdk/spdk3 00:24:52.426 Removing: /var/run/dpdk/spdk4 00:24:52.426 Removing: /var/run/dpdk/spdk_pid68788 00:24:52.426 Removing: /var/run/dpdk/spdk_pid68941 00:24:52.426 Removing: /var/run/dpdk/spdk_pid69134 00:24:52.426 Removing: /var/run/dpdk/spdk_pid69215 00:24:52.426 Removing: /var/run/dpdk/spdk_pid69243 00:24:52.426 Removing: /var/run/dpdk/spdk_pid69352 00:24:52.426 Removing: /var/run/dpdk/spdk_pid69370 00:24:52.426 Removing: /var/run/dpdk/spdk_pid69504 00:24:52.426 Removing: /var/run/dpdk/spdk_pid69700 00:24:52.426 Removing: /var/run/dpdk/spdk_pid69848 00:24:52.426 Removing: /var/run/dpdk/spdk_pid69926 00:24:52.426 
Removing: /var/run/dpdk/spdk_pid69997 00:24:52.426 Removing: /var/run/dpdk/spdk_pid70089 00:24:52.426 Removing: /var/run/dpdk/spdk_pid70161 00:24:52.426 Removing: /var/run/dpdk/spdk_pid70194 00:24:52.426 Removing: /var/run/dpdk/spdk_pid70229 00:24:52.426 Removing: /var/run/dpdk/spdk_pid70299 00:24:52.426 Removing: /var/run/dpdk/spdk_pid70391 00:24:52.426 Removing: /var/run/dpdk/spdk_pid70826 00:24:52.426 Removing: /var/run/dpdk/spdk_pid70865 00:24:52.426 Removing: /var/run/dpdk/spdk_pid70910 00:24:52.426 Removing: /var/run/dpdk/spdk_pid70918 00:24:52.426 Removing: /var/run/dpdk/spdk_pid70980 00:24:52.426 Removing: /var/run/dpdk/spdk_pid70988 00:24:52.426 Removing: /var/run/dpdk/spdk_pid71042 00:24:52.426 Removing: /var/run/dpdk/spdk_pid71045 00:24:52.426 Removing: /var/run/dpdk/spdk_pid71095 00:24:52.426 Removing: /var/run/dpdk/spdk_pid71101 00:24:52.426 Removing: /var/run/dpdk/spdk_pid71141 00:24:52.426 Removing: /var/run/dpdk/spdk_pid71152 00:24:52.426 Removing: /var/run/dpdk/spdk_pid71284 00:24:52.426 Removing: /var/run/dpdk/spdk_pid71314 00:24:52.426 Removing: /var/run/dpdk/spdk_pid71397 00:24:52.426 Removing: /var/run/dpdk/spdk_pid71723 00:24:52.426 Removing: /var/run/dpdk/spdk_pid71735 00:24:52.426 Removing: /var/run/dpdk/spdk_pid71766 00:24:52.686 Removing: /var/run/dpdk/spdk_pid71780 00:24:52.686 Removing: /var/run/dpdk/spdk_pid71795 00:24:52.686 Removing: /var/run/dpdk/spdk_pid71814 00:24:52.686 Removing: /var/run/dpdk/spdk_pid71828 00:24:52.686 Removing: /var/run/dpdk/spdk_pid71843 00:24:52.686 Removing: /var/run/dpdk/spdk_pid71862 00:24:52.686 Removing: /var/run/dpdk/spdk_pid71870 00:24:52.686 Removing: /var/run/dpdk/spdk_pid71886 00:24:52.686 Removing: /var/run/dpdk/spdk_pid71905 00:24:52.686 Removing: /var/run/dpdk/spdk_pid71918 00:24:52.686 Removing: /var/run/dpdk/spdk_pid71934 00:24:52.686 Removing: /var/run/dpdk/spdk_pid71953 00:24:52.686 Removing: /var/run/dpdk/spdk_pid71966 00:24:52.686 Removing: /var/run/dpdk/spdk_pid71982 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72001 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72014 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72030 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72060 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72074 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72106 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72173 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72201 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72211 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72238 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72249 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72251 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72293 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72307 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72334 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72345 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72349 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72358 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72368 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72372 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72387 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72391 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72419 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72446 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72455 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72484 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72488 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72501 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72536 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72553 00:24:52.686 Removing: 
/var/run/dpdk/spdk_pid72574 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72587 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72589 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72591 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72604 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72606 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72619 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72621 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72703 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72745 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72852 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72891 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72931 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72945 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72967 00:24:52.686 Removing: /var/run/dpdk/spdk_pid72976 00:24:52.686 Removing: /var/run/dpdk/spdk_pid73013 00:24:52.686 Removing: /var/run/dpdk/spdk_pid73029 00:24:52.686 Removing: /var/run/dpdk/spdk_pid73101 00:24:52.686 Removing: /var/run/dpdk/spdk_pid73117 00:24:52.686 Removing: /var/run/dpdk/spdk_pid73160 00:24:52.686 Removing: /var/run/dpdk/spdk_pid73225 00:24:52.686 Removing: /var/run/dpdk/spdk_pid73275 00:24:52.686 Removing: /var/run/dpdk/spdk_pid73302 00:24:52.686 Removing: /var/run/dpdk/spdk_pid73396 00:24:52.686 Removing: /var/run/dpdk/spdk_pid73439 00:24:52.686 Removing: /var/run/dpdk/spdk_pid73471 00:24:52.686 Removing: /var/run/dpdk/spdk_pid73697 00:24:52.686 Removing: /var/run/dpdk/spdk_pid73789 00:24:52.686 Removing: /var/run/dpdk/spdk_pid73818 00:24:52.686 Removing: /var/run/dpdk/spdk_pid73842 00:24:52.686 Removing: /var/run/dpdk/spdk_pid73881 00:24:52.686 Removing: /var/run/dpdk/spdk_pid73909 00:24:52.686 Removing: /var/run/dpdk/spdk_pid73948 00:24:52.945 Removing: /var/run/dpdk/spdk_pid73974 00:24:52.945 Removing: /var/run/dpdk/spdk_pid74373 00:24:52.945 Removing: /var/run/dpdk/spdk_pid74413 00:24:52.945 Removing: /var/run/dpdk/spdk_pid74744 00:24:52.945 Removing: /var/run/dpdk/spdk_pid75203 00:24:52.945 Removing: /var/run/dpdk/spdk_pid75476 00:24:52.945 Removing: /var/run/dpdk/spdk_pid76309 00:24:52.945 Removing: /var/run/dpdk/spdk_pid77216 00:24:52.945 Removing: /var/run/dpdk/spdk_pid77328 00:24:52.945 Removing: /var/run/dpdk/spdk_pid77401 00:24:52.945 Removing: /var/run/dpdk/spdk_pid78810 00:24:52.945 Removing: /var/run/dpdk/spdk_pid79121 00:24:52.945 Removing: /var/run/dpdk/spdk_pid82811 00:24:52.945 Removing: /var/run/dpdk/spdk_pid83181 00:24:52.946 Removing: /var/run/dpdk/spdk_pid83286 00:24:52.946 Removing: /var/run/dpdk/spdk_pid83419 00:24:52.946 Removing: /var/run/dpdk/spdk_pid83440 00:24:52.946 Removing: /var/run/dpdk/spdk_pid83461 00:24:52.946 Removing: /var/run/dpdk/spdk_pid83482 00:24:52.946 Removing: /var/run/dpdk/spdk_pid83574 00:24:52.946 Removing: /var/run/dpdk/spdk_pid83705 00:24:52.946 Removing: /var/run/dpdk/spdk_pid83848 00:24:52.946 Removing: /var/run/dpdk/spdk_pid83928 00:24:52.946 Removing: /var/run/dpdk/spdk_pid84116 00:24:52.946 Removing: /var/run/dpdk/spdk_pid84179 00:24:52.946 Removing: /var/run/dpdk/spdk_pid84264 00:24:52.946 Removing: /var/run/dpdk/spdk_pid84616 00:24:52.946 Removing: /var/run/dpdk/spdk_pid85013 00:24:52.946 Removing: /var/run/dpdk/spdk_pid85014 00:24:52.946 Removing: /var/run/dpdk/spdk_pid85015 00:24:52.946 Removing: /var/run/dpdk/spdk_pid85269 00:24:52.946 Removing: /var/run/dpdk/spdk_pid85511 00:24:52.946 Removing: /var/run/dpdk/spdk_pid85519 00:24:52.946 Removing: /var/run/dpdk/spdk_pid87887 00:24:52.946 Removing: /var/run/dpdk/spdk_pid87893 00:24:52.946 Removing: /var/run/dpdk/spdk_pid88212 
00:24:52.946 Removing: /var/run/dpdk/spdk_pid88226 00:24:52.946 Removing: /var/run/dpdk/spdk_pid88244 00:24:52.946 Removing: /var/run/dpdk/spdk_pid88276 00:24:52.946 Removing: /var/run/dpdk/spdk_pid88281 00:24:52.946 Removing: /var/run/dpdk/spdk_pid88364 00:24:52.946 Removing: /var/run/dpdk/spdk_pid88373 00:24:52.946 Removing: /var/run/dpdk/spdk_pid88480 00:24:52.946 Removing: /var/run/dpdk/spdk_pid88483 00:24:52.946 Removing: /var/run/dpdk/spdk_pid88590 00:24:52.946 Removing: /var/run/dpdk/spdk_pid88593 00:24:52.946 Removing: /var/run/dpdk/spdk_pid89039 00:24:52.946 Removing: /var/run/dpdk/spdk_pid89082 00:24:52.946 Removing: /var/run/dpdk/spdk_pid89191 00:24:52.946 Removing: /var/run/dpdk/spdk_pid89270 00:24:52.946 Removing: /var/run/dpdk/spdk_pid89617 00:24:52.946 Removing: /var/run/dpdk/spdk_pid89806 00:24:52.946 Removing: /var/run/dpdk/spdk_pid90225 00:24:52.946 Removing: /var/run/dpdk/spdk_pid90771 00:24:52.946 Removing: /var/run/dpdk/spdk_pid91610 00:24:52.946 Removing: /var/run/dpdk/spdk_pid92246 00:24:52.946 Removing: /var/run/dpdk/spdk_pid92248 00:24:52.946 Removing: /var/run/dpdk/spdk_pid94263 00:24:52.946 Removing: /var/run/dpdk/spdk_pid94314 00:24:52.946 Removing: /var/run/dpdk/spdk_pid94361 00:24:52.946 Removing: /var/run/dpdk/spdk_pid94415 00:24:52.946 Removing: /var/run/dpdk/spdk_pid94515 00:24:52.946 Removing: /var/run/dpdk/spdk_pid94571 00:24:52.946 Removing: /var/run/dpdk/spdk_pid94627 00:24:52.946 Removing: /var/run/dpdk/spdk_pid94674 00:24:52.946 Removing: /var/run/dpdk/spdk_pid95038 00:24:52.946 Removing: /var/run/dpdk/spdk_pid96242 00:24:52.946 Removing: /var/run/dpdk/spdk_pid96383 00:24:52.946 Removing: /var/run/dpdk/spdk_pid96626 00:24:52.946 Removing: /var/run/dpdk/spdk_pid97216 00:24:52.946 Removing: /var/run/dpdk/spdk_pid97377 00:24:52.946 Removing: /var/run/dpdk/spdk_pid97528 00:24:52.946 Removing: /var/run/dpdk/spdk_pid97621 00:24:52.946 Removing: /var/run/dpdk/spdk_pid97784 00:24:52.946 Removing: /var/run/dpdk/spdk_pid97893 00:24:52.946 Removing: /var/run/dpdk/spdk_pid98593 00:24:52.946 Removing: /var/run/dpdk/spdk_pid98624 00:24:52.946 Removing: /var/run/dpdk/spdk_pid98659 00:24:52.946 Removing: /var/run/dpdk/spdk_pid98910 00:24:52.946 Removing: /var/run/dpdk/spdk_pid98945 00:24:53.205 Removing: /var/run/dpdk/spdk_pid98975 00:24:53.205 Removing: /var/run/dpdk/spdk_pid99445 00:24:53.205 Removing: /var/run/dpdk/spdk_pid99450 00:24:53.205 Removing: /var/run/dpdk/spdk_pid99696 00:24:53.205 Removing: /var/run/dpdk/spdk_pid99812 00:24:53.205 Removing: /var/run/dpdk/spdk_pid99823 00:24:53.205 Clean 00:24:53.205 00:40:39 -- common/autotest_common.sh@1451 -- # return 0 00:24:53.205 00:40:39 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:24:53.205 00:40:39 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:53.205 00:40:39 -- common/autotest_common.sh@10 -- # set +x 00:24:53.205 00:40:39 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:24:53.205 00:40:39 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:53.205 00:40:39 -- common/autotest_common.sh@10 -- # set +x 00:24:53.205 00:40:39 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:24:53.205 00:40:39 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:24:53.205 00:40:39 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:24:53.205 00:40:39 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:24:53.205 00:40:39 -- spdk/autotest.sh@394 -- # hostname 00:24:53.205 00:40:39 -- 
spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:24:53.486 geninfo: WARNING: invalid characters removed from testname! 00:25:15.431 00:41:00 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:18.717 00:41:03 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:20.618 00:41:06 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:23.148 00:41:08 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:25.679 00:41:11 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:28.214 00:41:13 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:30.748 00:41:16 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:25:30.748 00:41:16 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:25:30.748 00:41:16 -- common/autotest_common.sh@1681 -- $ lcov --version 00:25:30.748 00:41:16 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:25:30.748 00:41:16 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:25:30.748 00:41:16 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:25:30.748 00:41:16 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:25:30.748 00:41:16 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:25:30.748 00:41:16 -- scripts/common.sh@336 -- $ IFS=.-: 
00:25:30.748 00:41:16 -- scripts/common.sh@336 -- $ read -ra ver1 00:25:30.748 00:41:16 -- scripts/common.sh@337 -- $ IFS=.-: 00:25:30.748 00:41:16 -- scripts/common.sh@337 -- $ read -ra ver2 00:25:30.748 00:41:16 -- scripts/common.sh@338 -- $ local 'op=<' 00:25:30.748 00:41:16 -- scripts/common.sh@340 -- $ ver1_l=2 00:25:30.748 00:41:16 -- scripts/common.sh@341 -- $ ver2_l=1 00:25:30.748 00:41:16 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:25:30.748 00:41:16 -- scripts/common.sh@344 -- $ case "$op" in 00:25:30.748 00:41:16 -- scripts/common.sh@345 -- $ : 1 00:25:30.748 00:41:16 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:25:30.748 00:41:16 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:30.748 00:41:16 -- scripts/common.sh@365 -- $ decimal 1 00:25:30.748 00:41:16 -- scripts/common.sh@353 -- $ local d=1 00:25:30.748 00:41:16 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:25:30.748 00:41:16 -- scripts/common.sh@355 -- $ echo 1 00:25:30.748 00:41:16 -- scripts/common.sh@365 -- $ ver1[v]=1 00:25:30.748 00:41:16 -- scripts/common.sh@366 -- $ decimal 2 00:25:30.748 00:41:16 -- scripts/common.sh@353 -- $ local d=2 00:25:30.748 00:41:16 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:25:30.748 00:41:16 -- scripts/common.sh@355 -- $ echo 2 00:25:30.748 00:41:16 -- scripts/common.sh@366 -- $ ver2[v]=2 00:25:30.748 00:41:16 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:25:30.748 00:41:16 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:25:30.748 00:41:16 -- scripts/common.sh@368 -- $ return 0 00:25:30.748 00:41:16 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:30.748 00:41:16 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:25:30.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.748 --rc genhtml_branch_coverage=1 00:25:30.748 --rc genhtml_function_coverage=1 00:25:30.748 --rc genhtml_legend=1 00:25:30.748 --rc geninfo_all_blocks=1 00:25:30.748 --rc geninfo_unexecuted_blocks=1 00:25:30.748 00:25:30.748 ' 00:25:30.748 00:41:16 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:25:30.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.748 --rc genhtml_branch_coverage=1 00:25:30.748 --rc genhtml_function_coverage=1 00:25:30.748 --rc genhtml_legend=1 00:25:30.748 --rc geninfo_all_blocks=1 00:25:30.748 --rc geninfo_unexecuted_blocks=1 00:25:30.748 00:25:30.748 ' 00:25:30.748 00:41:16 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:25:30.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.748 --rc genhtml_branch_coverage=1 00:25:30.748 --rc genhtml_function_coverage=1 00:25:30.748 --rc genhtml_legend=1 00:25:30.748 --rc geninfo_all_blocks=1 00:25:30.748 --rc geninfo_unexecuted_blocks=1 00:25:30.748 00:25:30.748 ' 00:25:30.748 00:41:16 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:25:30.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.748 --rc genhtml_branch_coverage=1 00:25:30.748 --rc genhtml_function_coverage=1 00:25:30.748 --rc genhtml_legend=1 00:25:30.748 --rc geninfo_all_blocks=1 00:25:30.748 --rc geninfo_unexecuted_blocks=1 00:25:30.748 00:25:30.748 ' 00:25:30.748 00:41:16 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:30.748 00:41:16 -- scripts/common.sh@15 -- $ shopt -s extglob 00:25:30.748 00:41:16 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:25:30.748 00:41:16 -- 
scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:30.748 00:41:16 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:30.748 00:41:16 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.748 00:41:16 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.748 00:41:16 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.748 00:41:16 -- paths/export.sh@5 -- $ export PATH 00:25:30.748 00:41:16 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.748 00:41:16 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:25:30.748 00:41:16 -- common/autobuild_common.sh@479 -- $ date +%s 00:25:30.748 00:41:16 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1734396076.XXXXXX 00:25:30.748 00:41:16 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1734396076.Y9aUab 00:25:30.748 00:41:16 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:25:30.748 00:41:16 -- common/autobuild_common.sh@485 -- $ '[' -n v22.11.4 ']' 00:25:30.748 00:41:16 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:25:30.748 00:41:16 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:25:30.748 00:41:16 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:25:30.749 00:41:16 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:25:30.749 00:41:16 -- common/autobuild_common.sh@495 -- $ get_config_params 00:25:30.749 00:41:16 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:25:30.749 00:41:16 -- common/autotest_common.sh@10 -- $ set +x 00:25:30.749 00:41:16 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring 
--with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:25:30.749 00:41:16 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:25:30.749 00:41:16 -- pm/common@17 -- $ local monitor 00:25:30.749 00:41:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:30.749 00:41:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:30.749 00:41:16 -- pm/common@25 -- $ sleep 1 00:25:30.749 00:41:16 -- pm/common@21 -- $ date +%s 00:25:30.749 00:41:16 -- pm/common@21 -- $ date +%s 00:25:30.749 00:41:16 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1734396076 00:25:30.749 00:41:16 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1734396076 00:25:30.749 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1734396076_collect-cpu-load.pm.log 00:25:30.749 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1734396076_collect-vmstat.pm.log 00:25:31.686 00:41:17 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:25:31.686 00:41:17 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:25:31.686 00:41:17 -- spdk/autopackage.sh@14 -- $ timing_finish 00:25:31.686 00:41:17 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:25:31.686 00:41:17 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:25:31.686 00:41:17 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:31.686 00:41:17 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:25:31.686 00:41:17 -- pm/common@29 -- $ signal_monitor_resources TERM 00:25:31.686 00:41:17 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:25:31.686 00:41:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:31.686 00:41:17 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:25:31.686 00:41:17 -- pm/common@44 -- $ pid=101607 00:25:31.686 00:41:17 -- pm/common@50 -- $ kill -TERM 101607 00:25:31.686 00:41:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:31.686 00:41:17 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:25:31.686 00:41:17 -- pm/common@44 -- $ pid=101609 00:25:31.686 00:41:17 -- pm/common@50 -- $ kill -TERM 101609 00:25:31.686 + [[ -n 5991 ]] 00:25:31.686 + sudo kill 5991 00:25:31.696 [Pipeline] } 00:25:31.711 [Pipeline] // timeout 00:25:31.717 [Pipeline] } 00:25:31.732 [Pipeline] // stage 00:25:31.738 [Pipeline] } 00:25:31.752 [Pipeline] // catchError 00:25:31.762 [Pipeline] stage 00:25:31.764 [Pipeline] { (Stop VM) 00:25:31.778 [Pipeline] sh 00:25:32.059 + vagrant halt 00:25:34.595 ==> default: Halting domain... 00:25:41.176 [Pipeline] sh 00:25:41.458 + vagrant destroy -f 00:25:44.006 ==> default: Removing domain... 
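The monitor teardown traced above (signal_monitor_resources TERM) checks for one pid file per resource monitor under the output power directory and sends each recorded pid a SIGTERM. A simplified sketch of that pattern, assuming the same pid-file layout seen in the trace (collect-cpu-load.pid, collect-vmstat.pid); the loop structure and error handling here are illustrative, not the actual pm/common code:

#!/usr/bin/env bash
# Illustrative teardown for the resource monitors started earlier
# (collect-cpu-load, collect-vmstat). Assumes each monitor wrote its
# pid to <output>/power/<name>.pid, as the trace shows.

power_dir=/home/vagrant/spdk_repo/spdk/../output/power

for monitor in collect-cpu-load collect-vmstat; do
    pid_file="$power_dir/$monitor.pid"
    [[ -e $pid_file ]] || continue          # monitor never started
    pid=$(<"$pid_file")
    kill -TERM "$pid" 2>/dev/null || true   # ask the monitor to exit
done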
00:25:44.288 [Pipeline] sh 00:25:44.569 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:25:44.578 [Pipeline] } 00:25:44.593 [Pipeline] // stage 00:25:44.598 [Pipeline] } 00:25:44.612 [Pipeline] // dir 00:25:44.617 [Pipeline] } 00:25:44.631 [Pipeline] // wrap 00:25:44.638 [Pipeline] } 00:25:44.651 [Pipeline] // catchError 00:25:44.660 [Pipeline] stage 00:25:44.662 [Pipeline] { (Epilogue) 00:25:44.675 [Pipeline] sh 00:25:44.956 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:25:50.238 [Pipeline] catchError 00:25:50.240 [Pipeline] { 00:25:50.254 [Pipeline] sh 00:25:50.536 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:25:50.795 Artifacts sizes are good 00:25:50.804 [Pipeline] } 00:25:50.817 [Pipeline] // catchError 00:25:50.828 [Pipeline] archiveArtifacts 00:25:50.835 Archiving artifacts 00:25:50.955 [Pipeline] cleanWs 00:25:50.967 [WS-CLEANUP] Deleting project workspace... 00:25:50.967 [WS-CLEANUP] Deferred wipeout is used... 00:25:50.973 [WS-CLEANUP] done 00:25:50.975 [Pipeline] } 00:25:50.990 [Pipeline] // stage 00:25:50.996 [Pipeline] } 00:25:51.009 [Pipeline] // node 00:25:51.015 [Pipeline] End of Pipeline 00:25:51.067 Finished: SUCCESS
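The epilogue compresses the build output and runs check_artifacts_size.sh, which here reports "Artifacts sizes are good" before the artifacts are archived and the workspace is wiped. The script itself is not shown in the log; a purely hypothetical sketch of such a size gate might look like the following (the 100 MB limit and the output directory layout are assumptions, not taken from the log):

#!/usr/bin/env bash
# Hypothetical size gate in the spirit of check_artifacts_size.sh.
# Only the script name and its "Artifacts sizes are good" message
# appear in the log; the limit and layout below are assumptions.

limit_mb=100
status=0

while IFS= read -r -d '' f; do
    size_mb=$(( $(stat -c %s "$f") / 1024 / 1024 ))
    if (( size_mb > limit_mb )); then
        echo "Artifact too large (${size_mb} MB): $f"
        status=1
    fi
done < <(find output -type f -print0)

(( status == 0 )) && echo "Artifacts sizes are good"
exit "$status"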